[kernel] r22085 - in dists/squeeze-security/linux-2.6/debian: . patches/bugfix/all/stable patches/series
Raphaël Hertzog
hertzog at moszumanska.debian.org
Tue Nov 25 16:37:49 UTC 2014
Author: hertzog
Date: Tue Nov 25 16:37:48 2014
New Revision: 22085
Log:
Add patches for upstream stable releases 2.6.32.61 to 2.6.32.63
Disable the Debian patches that are now included upstream in 2.6.32.61. The same
still needs to be done for 2.6.32.62 to 2.6.32.64 (Holger will take care of it).
There is a TODO left in debian/patches/series/48squeeze9 about a patch that
probably needs to be updated.
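For context, in this packaging layout the per-revision series file both enables
the new stable patches and unapplies Debian patches that the upstream release
has made redundant. A hypothetical excerpt of what the relevant entries in
debian/patches/series/48squeeze9 could look like (the patch names below are
illustrative placeholders, not the actual contents of r22085):

```
+ bugfix/all/stable/2.6.32.61.patch
+ bugfix/all/stable/2.6.32.62.patch
+ bugfix/all/stable/2.6.32.63.patch
- bugfix/all/some-fix-now-merged-in-2.6.32.61.patch
```

Lines prefixed with `+` apply a patch on top of the previous revision, and lines
prefixed with `-` unapply one that an earlier revision added — which is how a
Debian-specific fix gets disabled once the equivalent change ships in an
upstream stable release.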
Added:
dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.61.patch
dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.62.patch
dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.63.patch
Modified:
dists/squeeze-security/linux-2.6/debian/changelog
dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9
Modified: dists/squeeze-security/linux-2.6/debian/changelog
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/changelog Tue Nov 25 13:48:40 2014 (r22084)
+++ dists/squeeze-security/linux-2.6/debian/changelog Tue Nov 25 16:37:48 2014 (r22085)
@@ -1,5 +1,425 @@
linux-2.6 (2.6.32-48squeeze9) UNRELEASED; urgency=medium
+ [ Raphaël Hertzog ]
+ * The following upstream releases include many security fixes which
+ were already shipped in previous Debian releases.
+ * Add stable release 2.6.32.61:
+ - Revert "pcdp: use early_ioremap/early_iounmap to access pcdp table"
+ - Revert "block: improve queue_should_plug() by looking at IO depths"
+ - 2.6.32.y: timekeeping: Fix nohz issue with commit
+ 61b76840ddee647c0c223365378c3f394355b7d7
+ - clockevents: Don't allow dummy broadcast timers
+ - posix-cpu-timers: Fix nanosleep task_struct leak
+ - timer: Don't reinitialize the cpu base lock during CPU_UP_PREPARE
+ - tick: Cleanup NOHZ per cpu data on cpu down
+ - kbuild: Fix gcc -x syntax
+ - gen_init_cpio: avoid stack overflow when expanding
+ - usermodehelper: introduce umh_complete(sub_info)
+ - usermodehelper: implement UMH_KILLABLE
+ - usermodehelper: ____call_usermodehelper() doesn't need do_exit()
+ - kmod: introduce call_modprobe() helper
+ - kmod: make __request_module() killable
+ - exec: do not leave bprm->interp on stack
+ - exec: use -ELOOP for max recursion depth
+ - signal: always clear sa_restorer on execve
+ - ptrace: ptrace_resume() shouldn't wake up !TASK_TRACED thread
+ - ptrace: introduce signal_wake_up_state() and ptrace_signal_wake_up()
+ - ptrace: ensure arch_ptrace/ptrace_request can never race with SIGKILL
+ - ptrace: Fix ptrace when task is in task_is_stopped() state
+ - kernel/signal.c: stop info leak via the tkill and the tgkill syscalls
+ - signal: Define __ARCH_HAS_SA_RESTORER so we know whether to clear
+ sa_restorer
+ - kernel/signal.c: use __ARCH_HAS_SA_RESTORER instead of SA_RESTORER
+ - wake_up_process() should be never used to wakeup a TASK_STOPPED/TRACED
+ task
+ - coredump: prevent double-free on an error path in core dumper
+ - kernel/sys.c: call disable_nonboot_cpus() in kernel_restart()
+ - ring-buffer: Fix race between integrity check and readers
+ - genalloc: stop crashing the system when destroying a pool
+ - kernel/resource.c: fix stack overflow in __reserve_region_with_split()
+ - Driver core: treat unregistered bus_types as having no devices
+ - cgroup: remove incorrect dget/dput() pair in cgroup_create_dir()
+ - Fix a dead loop in async_synchronize_full()
+ - tracing: Don't call page_to_pfn() if page is NULL
+ - tracing: Fix double free when function profile init failed
+ - hugetlb: fix resv_map leak in error path
+ - mm: fix vma_resv_map() NULL pointer
+ - mm: Fix PageHead when !CONFIG_PAGEFLAGS_EXTENDED
+ - mm: bugfix: set current->reclaim_state to NULL while returning from
+ kswapd()
+ - mm: fix invalidate_complete_page2() lock ordering
+ - mempolicy: fix a race in shared_policy_replace()
+ - ALSA: hda - More ALC663 fixes and support of compatible chips
+ - ALSA: hda - Add a pin-fix for FSC Amilo Pi1505
+ - ALSA: seq: Fix missing error handling in snd_seq_timer_open()
+ - ALSA: ac97 - Fix missing NULL check in snd_ac97_cvol_new()
+ - x86, ioapic: initialize nr_ioapic_registers early in mp_register_ioapic()
+ - x86: Don't use the EFI reboot method by default
+ - x86, random: make ARCH_RANDOM prompt if EMBEDDED, not EXPERT
+ - x86/xen: don't assume %ds is usable in xen_iret for 32-bit PVOPS.
+ - x86/msr: Add capabilities check
+ - x86/mm: Check if PUD is large when validating a kernel address
+ - x86, mm, paravirt: Fix vmalloc_fault oops during lazy MMU updates
+ - xen/bootup: allow read_tscp call for Xen PV guests.
+ - xen/bootup: allow {read|write}_cr8 pvops call.
+ - KVM: x86: fix for buffer overflow in handling of MSR_KVM_SYSTEM_TIME
+ (CVE-2013-1796)
+ - KVM: x86: relax MSR_KVM_SYSTEM_TIME alignment check
+ - KVM: Fix bounds checking in ioapic indirect register reads (CVE-2013-1798)
+ - KVM: x86: invalid opcode oops on SET_SREGS with OSXSAVE bit set
+ (CVE-2012-4461)
+ - MCE: Fix vm86 handling for 32bit mce handler
+ - ACPI / cpuidle: Fix NULL pointer issues when cpuidle is disabled
+ - alpha: Add irongate_io to PCI bus resources
+ - PARISC: fix user-triggerable panic on parisc
+ - serial: 8250, increase PASS_LIMIT
+ - drivers/char/ipmi: memcpy, need additional 2 bytes to avoid memory
+ overflow
+ - w1: fix oops when w1_search is called from netlink connector
+ - staging: comedi: ni_labpc: correct differential channel sequence for AI
+ commands
+ - staging: comedi: ni_labpc: set up command4 register *after* command3
+ - staging: comedi: comedi_test: fix race when cancelling command
+ - staging: comedi: fix memory leak for saved channel list
+ - staging: comedi: s626: don't dereference insn->data
+ - staging: comedi: jr3_pci: fix iomem dereference
+ - staging: comedi: don't dereference user memory for INSN_INTTRIG
+ - staging: comedi: check s->async for poll(), read() and write()
+ - staging: comedi: das08: Correct AO output for das08jr-16-ao
+ - staging: vt6656: [BUG] out of bound array reference in RFbSetPower.
+ - libata: fix Null pointer dereference on disk error
+ - scsi: Silence unnecessary warnings about ioctl to partition
+ - scsi: use __uX types for headers exported to user space
+ - fix crash in scsi_dispatch_cmd()
+ - SCSI: bnx2i: Fixed NULL ptr deference for 1G bnx2 Linux iSCSI offload
+ - keys: fix race with concurrent install_user_keyrings()
+ - crypto: cryptd - disable softirqs in cryptd_queue_worker to prevent data
+ corruption
+ - xfrm_user: fix info leak in copy_to_user_state()
+ - xfrm_user: fix info leak in copy_to_user_policy()
+ - xfrm_user: fix info leak in copy_to_user_tmpl()
+ - xfrm_user: return error pointer instead of NULL
+ - xfrm_user: return error pointer instead of NULL #2
+ - r8169: correct settings of rtl8102e.
+ - r8169: remove the obsolete and incorrect AMD workaround
+ - r8169: Add support for D-Link 530T rev C1 (Kernel Bug 38862)
+ - r8169: incorrect identifier for a 8168dp
+ - b43legacy: Fix crash on unload when firmware not available
+ - tg3: Avoid null pointer dereference in tg3_interrupt in netconsole mode
+ - IPoIB: Fix use-after-free of multicast object
+ - telephony: ijx: buffer overflow in ixj_write_cid()
+ - Bluetooth: Fix incorrect strncpy() in hidp_setup_hid()
+ - Bluetooth: HCI - Fix info leak in getsockopt(HCI_FILTER)
+ - Bluetooth: RFCOMM - Fix info leak via getsockname()
+ - Bluetooth: RFCOMM - Fix missing msg_namelen update in
+ rfcomm_sock_recvmsg()
+ - Bluetooth: L2CAP - Fix info leak via getsockname()
+ - Bluetooth: fix possible info leak in bt_sock_recvmsg()
+ - xhci: Make handover code more robust
+ - USB: EHCI: go back to using the system clock for QH unlinks
+ - USB: whiteheat: fix memory leak in error path
+ - USB: serial: Fix memory leak in sierra_release()
+ - USB: mos7840: fix urb leak at release
+ - USB: mos7840: fix port-device leak in error path
+ - USB: garmin_gps: fix memory leak on disconnect
+ - USB: io_ti: Fix NULL dereference in chase_port()
+ - USB: cdc-wdm: fix buffer overflow
+ - USB: serial: ftdi_sio: Handle the old_termios == 0 case e.g.
+ uart_resume_port()
+ - USB: ftdi_sio: Quiet sparse noise about using plain integer was NULL
+ pointer
+ - epoll: prevent missed events on EPOLL_CTL_MOD
+ - fs/compat_ioctl.c: VIDEO_SET_SPU_PALETTE missing error check
+ - fs/fscache/stats.c: fix memory leak
+ - sysfs: sysfs_pathname/sysfs_add_one: Use strlcat() instead of strcat()
+ - tmpfs: fix use-after-free of mempolicy object
+ - jbd: Delay discarding buffers in journal_unmap_buffer
+ - jbd: Fix assertion failure in commit code due to lacking transaction
+ credits
+ - jbd: Fix lock ordering bug in journal_unmap_buffer()
+ - ext4: Fix fs corruption when make_indexed_dir() fails
+ - ext4: don't dereference null pointer when make_indexed_dir() fails
+ - ext4: Fix max file size and logical block counting of extent format file
+ - ext4: fix memory leak in ext4_xattr_set_acl()'s error path
+ - ext4: online defrag is not supported for journaled files
+ - ext4: always set i_op in ext4_mknod()
+ - ext4: fix fdatasync() for files with only i_size changes
+ - ext4: lock i_mutex when truncating orphan inodes
+ - ext4: fix race in ext4_mb_add_n_trim()
+ - ext4: limit group search loop for non-extent files
+ - CVE-2012-4508 kernel: ext4: AIO vs fallocate stale data exposure
+ - ext4: make orphan functions be no-op in no-journal mode
+ - ext4: avoid hang when mounting non-journal filesystems with orphan list
+ - udf: fix memory leak while allocating blocks during write
+ - udf: avoid info leak on export
+ - udf: Fix bitmap overflow on large filesystems with small block size
+ - fs/cifs/cifs_dfs_ref.c: fix potential memory leakage
+ - isofs: avoid info leak on export
+ - fat: Fix stat->f_namelen
+ - NLS: improve UTF8 -> UTF16 string conversion routine
+ - hfsplus: fix potential overflow in hfsplus_file_truncate()
+ - btrfs: use rcu_barrier() to wait for bdev puts at unmount
+ - kernel panic when mount NFSv4
+ - nfsd4: fix oops on unusual readlike compound
+ - net/core: Fix potential memory leak in dev_set_alias()
+ - net: reduce net_rx_action() latency to 2 HZ
+ - softirq: reduce latencies
+ - af_packet: remove BUG statement in tpacket_destruct_skb
+ - bridge: set priority of STP packets
+ - bonding: Fix slave selection bug.
+ - ipv4: check rt_genid in dst_check
+ - net_sched: gact: Fix potential panic in tcf_gact().
+ - net: sched: integer overflow fix
+ - net: prevent setting ttl=0 via IP_TTL
+ - net: fix divide by zero in tcp algorithm illinois
+ - net: guard tcp_set_keepalive() to tcp sockets
+ Fixes CVE-2012-6657
+ - net: fix info leak in compat dev_ifconf()
+ - inet: add RCU protection to inet->opt
+ - tcp: allow splice() to build full TSO packets
+ - tcp: fix MSG_SENDPAGE_NOTLAST logic
+ - tcp: preserve ACK clocking in TSO
+ - unix: fix a race condition in unix_release()
+ - dcbnl: fix various netlink info leaks
+ - sctp: fix memory leak in sctp_datamsg_from_user() when copy from user
+ space fails
+ - net: sctp: sctp_setsockopt_auth_key: use kzfree instead of kfree
+ - net: sctp: sctp_endpoint_free: zero out secret key data
+ - net: sctp: sctp_auth_key_put: use kzfree instead of kfree
+ - ipv6: discard overlapping fragment
+ - ipv6: make fragment identifications less predictable
+ - netfilter: nf_ct_ipv4: packets with wrong ihl are invalid
+ - ipvs: allow transmit of GRO aggregated skbs
+ - ipvs: IPv6 MTU checking cleanup and bugfix
+ - ipvs: fix info leak in getsockopt(IP_VS_SO_GET_TIMEOUT)
+ - atm: update msg_namelen in vcc_recvmsg()
+ - atm: fix info leak via getsockname()
+ - atm: fix info leak in getsockopt(SO_ATMPVC)
+ - ax25: fix info leak via msg_name in ax25_recvmsg()
+ - isdnloop: fix and simplify isdnloop_init()
+ - iucv: Fix missing msg_namelen update in iucv_sock_recvmsg()
+ - llc: fix info leak via getsockname()
+ - llc: Fix missing msg_namelen update in llc_ui_recvmsg()
+ - rds: set correct msg_namelen
+ - rose: fix info leak via msg_name in rose_recvmsg()
+ - irda: Fix missing msg_namelen update in irda_recvmsg_dgram()
+ - tipc: fix info leaks via msg_name in recv_msg/recv_stream
+ - mpt2sas: Send default descriptor for RAID pass through in mpt2ctl
+ - x86, ptrace: fix build breakage with gcc 4.7
+ * Add stable release 2.6.32.62:
+ - scsi: fix missing include linux/types.h in scsi_netlink.h
+ - Fix lockup related to stop_machine being stuck in __do_softirq.
+ - Revert "x86, ptrace: fix build breakage with gcc 4.7"
+ - x86, ptrace: fix build breakage with gcc 4.7 (second try)
+ - ipvs: fix CHECKSUM_PARTIAL for TCP, UDP
+ - intel-iommu: Flush unmaps at domain_exit
+ - staging: comedi: ni_65xx: (bug fix) confine insn_bits to one subdevice
+ - kernel/kmod.c: check for NULL in call_usermodehelper_exec()
+ - cciss: fix info leak in cciss_ioctl32_passthru()
+ - cpqarray: fix info leak in ida_locked_ioctl()
+ - drivers/cdrom/cdrom.c: use kzalloc() for failing hardware
+ - sctp: deal with multiple COOKIE_ECHO chunks
+ - sctp: Use correct sideffect command in duplicate cookie handling
+ - ipv6: ip6_sk_dst_check() must not assume ipv6 dst
+ - af_key: fix info leaks in notify messages
+ - af_key: initialize satype in key_notify_policy_flush()
+ - block: do not pass disk names as format strings
+ - b43: stop format string leaking into error msgs
+ - HID: validate HID report id size
+ - HID: zeroplus: validate output report details
+ - HID: pantherlord: validate output report details
+ - HID: LG: validate HID output report details
+ - HID: check for NULL field when setting values
+ - HID: provide a helper for validating hid reports
+ - crypto: api - Fix race condition in larval lookup
+ - ipv6: tcp: fix panic in SYN processing
+ - tcp: must unclone packets before mangling them
+ - net: do not call sock_put() on TIMEWAIT sockets
+ - net: heap overflow in __audit_sockaddr()
+ - proc connector: fix info leaks
+ - can: dev: fix nlmsg size calculation in can_get_size()
+ - net: vlan: fix nlmsg size calculation in vlan_get_size()
+ - farsync: fix info leak in ioctl
+ - connector: use nlmsg_len() to check message length
+ - net: dst: provide accessor function to dst->xfrm
+ - sctp: Use software crc32 checksum when xfrm transform will happen.
+ - sctp: Perform software checksum if packet has to be fragmented.
+ - wanxl: fix info leak in ioctl
+ - davinci_emac.c: Fix IFF_ALLMULTI setup
+ - resubmit bridge: fix message_age_timer calculation
+ - ipv6 mcast: use in6_dev_put in timer handlers instead of __in6_dev_put
+ - ipv4 igmp: use in_dev_put in timer handlers instead of __in_dev_put
+ - dm9601: fix IFF_ALLMULTI handling
+ - bonding: Fix broken promiscuity reference counting issue
+ - ll_temac: Reset dma descriptors indexes on ndo_open
+ - tcp: fix tcp_md5_hash_skb_data()
+ - ipv6: fix possible crashes in ip6_cork_release()
+ - ip_tunnel: fix kernel panic with icmp_dest_unreach
+ - net: sctp: fix NULL pointer dereference in socket destruction
+ - packet: packet_getname_spkt: make sure string is always 0-terminated
+ - neighbour: fix a race in neigh_destroy()
+ - net: Swap ver and type in pppoe_hdr
+ - sunvnet: vnet_port_remove must call unregister_netdev
+ - ifb: fix rcu_sched self-detected stalls
+ - dummy: fix oops when loading the dummy failed
+ - ifb: fix oops when loading the ifb failed
+ - vlan: fix a race in egress prio management
+ - arcnet: cleanup sizeof parameter
+ - sysctl net: Keep tcp_syn_retries inside the boundary
+ - sctp: fully initialize sctp_outq in sctp_outq_init
+ - net_sched: Fix stack info leak in cbq_dump_wrr().
+ - af_key: more info leaks in pfkey messages
+ - net_sched: info leak in atm_tc_dump_class()
+ - htb: fix sign extension bug
+ - net: check net.core.somaxconn sysctl values
+ - tcp: cubic: fix bug in bictcp_acked()
+ - ipv6: don't stop backtracking in fib6_lookup_1 if subtree does not match
+ - ipv6: remove max_addresses check from ipv6_create_tempaddr
+ - ipv6: drop packets with multiple fragmentation headers
+ - ipv6: Don't depend on per socket memory for neighbour discovery messages
+ - ICMPv6: treat dest unreachable codes 5 and 6 as EACCES, not EPROTO
+ - tipc: fix lockdep warning during bearer initialization
+ - net: Fix "ip rule delete table 256"
+ - ipv6: use rt6_get_dflt_router to get default router in rt6_route_rcv
+ - random32: fix off-by-one in seeding requirement
+ - bonding: fix two race conditions in bond_store_updelay/downdelay
+ - isdnloop: use strlcpy() instead of strcpy()
+ - ipv4: fix possible seqlock deadlock
+ - inet: prevent leakage of uninitialized memory to user in recv syscalls
+ - net: rework recvmsg handler msg_name and msg_namelen logic
+ - net: add BUG_ON if kernel advertises msg_namelen > sizeof(struct
+ sockaddr_storage)
+ - inet: fix addr_len/msg->msg_namelen assignment in recv_error and rxpmtu
+ functions
+ - net: clamp ->msg_namelen instead of returning an error
+ - ipv6: fix leaking uninitialized port number of offender sockaddr
+ - atm: idt77252: fix dev refcnt leak
+ - net: core: Always propagate flag changes to interfaces
+ - bridge: flush br's address entry in fdb when remove the bridge dev
+ - inet: fix possible seqlock deadlocks
+ - ipv6: fix possible seqlock deadlock in ip6_finish_output2
+ - {pktgen, xfrm} Update IPv4 header total len and checksum after
+ tranformation
+ - net: drop_monitor: fix the value of maxattr
+ - net: unix: allow bind to fail on mutex lock
+ - drivers/net/hamradio: Integer overflow in hdlcdrv_ioctl()
+ - hamradio/yam: fix info leak in ioctl
+ - rds: prevent dereference of a NULL device
+ - net: rose: restore old recvmsg behavior
+ - net: llc: fix use after free in llc_ui_recvmsg
+ - inet_diag: fix inet_diag_dump_icsk() timewait socket state logic
+ - net: fix 'ip rule' iif/oif device rename
+ - tg3: Fix deadlock in tg3_change_mtu()
+ - bonding: 802.3ad: make aggregator_identifier bond-private
+ - net: sctp: fix sctp_connectx abi for ia32 emulation/compat mode
+ - virtio-net: alloc big buffers also when guest can receive UFO
+ - tg3: Don't check undefined error bits in RXBD
+ - net: sctp: fix sctp_sf_do_5_1D_ce to verify if we/peer is AUTH capable
+ - net: sctp: fix skb leakage in COOKIE ECHO path of chunk->auth_chunk
+ - net: socket: error on a negative msg_namelen
+ - netlink: don't compare the nul-termination in nla_strcmp
+ - isdnloop: several buffer overflows
+ - rds: prevent dereference of a NULL device in rds_iw_laddr_check
+ - isdnloop: Validate NUL-terminated strings from user.
+ - sctp: unbalanced rcu lock in ip_queue_xmit()
+ - aacraid: prevent invalid pointer dereference
+ - ipv6: udp packets following an UFO enqueued packet need also be handled by
+ UFO
+ - inet: fix possible memory corruption with UDP_CORK and UFO
+ - vm: add vm_iomap_memory() helper function
+ - Fix a few incorrectly checked [io_]remap_pfn_range() calls
+ - libertas: potential oops in debugfs
+ - x86, fpu, amd: Clear exceptions in AMD FXSAVE workaround
+ - gianfar: disable TX vlan based on kernel 2.6.x
+ - powernow-k6: set transition latency value so ondemand governor can be used
+ - powernow-k6: disable cache when changing frequency
+ - powernow-k6: correctly initialize default parameters
+ - powernow-k6: reorder frequencies
+ - tcp: fix tcp_trim_head() to adjust segment count with skb MSS
+ - tcp_cubic: limit delayed_ack ratio to prevent divide error
+ - tcp_cubic: fix the range of delayed_ack
+ - n_tty: Fix n_tty_write crash when echoing in raw mode
+ - exec/ptrace: fix get_dumpable() incorrect tests
+ - ipv6: call udp_push_pending_frames when uncorking a socket with AF_INET
+ pending data
+ - dm snapshot: fix data corruption
+ - crypto: ansi_cprng - Fix off by one error in non-block size request
+ - uml: check length in exitcode_proc_write()
+ - KVM: Improve create VCPU parameter (CVE-2013-4587)
+ - KVM: x86: Fix potential divide by 0 in lapic (CVE-2013-6367)
+ - qeth: avoid buffer overflow in snmp ioctl
+ - xfs: underflow bug in xfs_attrlist_by_handle()
+ - aacraid: missing capable() check in compat ioctl
+ - SELinux: Fix kernel BUG on empty security contexts.
+ - s390: fix kernel crash due to linkage stack instructions
+ - netfilter: nf_conntrack_dccp: fix skb_header_pointer API usages
+ - floppy: ignore kernel-only members in FDRAWCMD ioctl input
+ - floppy: don't write kernel-only members to FDRAWCMD ioctl output
+ * Add stable release 2.6.32.63:
+ - ethtool: Report link-down while interface is down
+ - futex: Add another early deadlock detection check
+ - futex: Prevent attaching to kernel threads
+ - futex-prevent-requeue-pi-on-same-futex.patch futex: Forbid uaddr == uaddr2
+ in futex_requeue(..., requeue_pi=1)
+ - futex: Validate atomic acquisition in futex_lock_pi_atomic()
+ - futex: Always cleanup owner tid in unlock_pi
+ - futex: Make lookup_pi_state more robust
+ - auditsc: audit_krule mask accesses need bounds checking
+ - net: fix regression introduced in 2.6.32.62 by sysctl fixes
+ * Add stable release 2.6.32.64:
+ - x86_32, entry: Do syscall exit work on badsys (CVE-2014-4508)
+ - x86_32, entry: Store badsys error code in %eax
+ - x86_32, entry: Clean up sysenter_badsys declaration
+ - MIPS: Cleanup flags in syscall flags handlers.
+ - MIPS: asm: thread_info: Add _TIF_SECCOMP flag
+ - fix autofs/afs/etc. magic mountpoint breakage
+ - ALSA: control: Make sure that id->index does not overflow
+ - ALSA: control: Handle numid overflow
+ - sctp: Fix sk_ack_backlog wrap-around problem
+ - mm: try_to_unmap_cluster() should lock_page() before mlocking
+ - filter: prevent nla extensions to peek beyond the end of the message
+ - ALSA: control: Protect user controls against concurrent access
+ - ptrace,x86: force IRET path after a ptrace_stop()
+ - sym53c8xx_2: Set DID_REQUEUE return code when aborting squeue
+ - tcp: fix tcp_match_skb_to_sack() for unaligned SACK at end of an skb
+ - igmp: fix the problem when mc leave group
+ - appletalk: Fix socket referencing in skb
+ - net: sctp: fix information leaks in ulpevent layer
+ - sunvnet: clean up objects created in vnet_new() on vnet_exit()
+ - ipv4: fix buffer overflow in ip_options_compile()
+ - net: sctp: inherit auth_capable on INIT collisions
+ Fixes CVE-2014-5077
+ - net: sendmsg: fix NULL pointer dereference
+ - tcp: Fix integer-overflows in TCP veno
+ - tcp: Fix integer-overflow in TCP vegas
+ - macvlan: Initialize vlan_features to turn on offload support.
+ - net: Correctly set segment mac_len in skb_segment().
+ - iovec: make sure the caller actually wants anything in memcpy_fromiovecend
+ - sctp: fix possible seqlock seadlock in sctp_packet_transmit()
+ - Revert "nfsd: correctly handle return value from nfsd_map_name_to_*"
+ - dm crypt: fix access beyond the end of allocated space
+ - gianfar: disable vlan tag insertion by default
+ - USB: kobil_sct: fix non-atomic allocation in write path
+ - fix misuses of f_count() in ppp and netlink
+ - net: sctp: fix skb_over_panic when receiving malformed ASCONF chunks
+ - tty: Fix high cpu load if tty is unreleaseable
+ - netfilter: nf_log: account for size of NLMSG_DONE attribute
+ - netfilter: nfnetlink_log: fix maximum packet length logged to userspace
+ - ring-buffer: Always reset iterator to reader page
+ - md/raid6: avoid data corruption during recovery of double-degraded RAID6
+ - net: pppoe: use correct channel MTU when using Multilink PPP
+ - ARM: 7668/1: fix memset-related crashes caused by recent GCC (4.7.2)
+ optimizations
+ - ARM: 7670/1: fix the memset fix
+ - lib/lzo: Update LZO compression to current upstream version
+ - Documentation: lzo: document part of the encoding
+ - lzo: check for length overrun in variable length encoding.
+ - USB: add new zte 3g-dongle's pid to option.c
+ - futex: Unlock hb->lock in futex_wait_requeue_pi() error path
+ - isofs: Fix unbounded recursion when processing relocated directories
+ Fixes CVE-2014-5471 CVE-2014-5472
+ - sctp: not send SCTP_PEER_ADDR_CHANGE notifications with failed probe
+
[ Holger Levsen ]
* New upstream stable release 2.6.32.64, see
https://lkml.org/lkml/2014/11/23/181 for more information.
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.61.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.61.patch Tue Nov 25 16:37:48 2014 (r22085)
@@ -0,0 +1,7008 @@
+diff --git a/Makefile b/Makefile
+index b0e245e..e5a279c 100644
+diff --git a/arch/alpha/kernel/sys_nautilus.c b/arch/alpha/kernel/sys_nautilus.c
+index 99c0f46..dc616b3 100644
+--- a/arch/alpha/kernel/sys_nautilus.c
++++ b/arch/alpha/kernel/sys_nautilus.c
+@@ -189,6 +189,10 @@ nautilus_machine_check(unsigned long vector, unsigned long la_ptr)
+ extern void free_reserved_mem(void *, void *);
+ extern void pcibios_claim_one_bus(struct pci_bus *);
+
++static struct resource irongate_io = {
++ .name = "Irongate PCI IO",
++ .flags = IORESOURCE_IO,
++};
+ static struct resource irongate_mem = {
+ .name = "Irongate PCI MEM",
+ .flags = IORESOURCE_MEM,
+@@ -210,6 +214,7 @@ nautilus_init_pci(void)
+
+ irongate = pci_get_bus_and_slot(0, 0);
+ bus->self = irongate;
++ bus->resource[0] = &irongate_io;
+ bus->resource[1] = &irongate_mem;
+
+ pci_bus_size_bridges(bus);
+diff --git a/arch/arm/include/asm/signal.h b/arch/arm/include/asm/signal.h
+index 43ba0fb..559ee24 100644
+--- a/arch/arm/include/asm/signal.h
++++ b/arch/arm/include/asm/signal.h
+@@ -127,6 +127,7 @@ struct sigaction {
+ __sigrestore_t sa_restorer;
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/avr32/include/asm/signal.h b/arch/avr32/include/asm/signal.h
+index 8790dfc..e6952a0 100644
+--- a/arch/avr32/include/asm/signal.h
++++ b/arch/avr32/include/asm/signal.h
+@@ -128,6 +128,7 @@ struct sigaction {
+ __sigrestore_t sa_restorer;
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/cris/include/asm/signal.h b/arch/cris/include/asm/signal.h
+index ea6af9a..057fea2 100644
+--- a/arch/cris/include/asm/signal.h
++++ b/arch/cris/include/asm/signal.h
+@@ -122,6 +122,7 @@ struct sigaction {
+ void (*sa_restorer)(void);
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/h8300/include/asm/signal.h b/arch/h8300/include/asm/signal.h
+index fd8b66e..8695707 100644
+--- a/arch/h8300/include/asm/signal.h
++++ b/arch/h8300/include/asm/signal.h
+@@ -121,6 +121,7 @@ struct sigaction {
+ void (*sa_restorer)(void);
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/m32r/include/asm/signal.h b/arch/m32r/include/asm/signal.h
+index 9c1acb2..a96a9f4 100644
+--- a/arch/m32r/include/asm/signal.h
++++ b/arch/m32r/include/asm/signal.h
+@@ -123,6 +123,7 @@ struct sigaction {
+ __sigrestore_t sa_restorer;
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/m68k/include/asm/signal.h b/arch/m68k/include/asm/signal.h
+index 5bc09c7..01a492a 100644
+--- a/arch/m68k/include/asm/signal.h
++++ b/arch/m68k/include/asm/signal.h
+@@ -119,6 +119,7 @@ struct sigaction {
+ __sigrestore_t sa_restorer;
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 77f5021..57ff855 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -657,7 +657,7 @@ KBUILD_CPPFLAGS += -D"DATAOFFSET=$(if $(dataoffset-y),$(dataoffset-y),0)"
+ LDFLAGS += -m $(ld-emul)
+
+ ifdef CONFIG_MIPS
+-CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -xc /dev/null | \
++CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
+ egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
+ sed -e 's/^\#define /-D/' -e "s/ /='/" -e "s/$$/'/")
+ ifdef CONFIG_64BIT
+diff --git a/arch/mips/kernel/Makefile b/arch/mips/kernel/Makefile
+index eecd2a9..700dc14 100644
+--- a/arch/mips/kernel/Makefile
++++ b/arch/mips/kernel/Makefile
+@@ -88,7 +88,7 @@ obj-$(CONFIG_GPIO_TXX9) += gpio_txx9.o
+ obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o
+ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+
+-CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(KBUILD_CFLAGS) -Wa,-mdaddi -c -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-DHAVE_AS_SET_DADDI"; fi)
++CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(KBUILD_CFLAGS) -Wa,-mdaddi -c -o /dev/null -x c /dev/null >/dev/null 2>&1; then echo "-DHAVE_AS_SET_DADDI"; fi)
+
+ obj-$(CONFIG_HAVE_STD_PC_SERIAL_PORT) += 8250-platform.o
+
+diff --git a/arch/mn10300/include/asm/signal.h b/arch/mn10300/include/asm/signal.h
+index 7e891fc..045d6a2 100644
+--- a/arch/mn10300/include/asm/signal.h
++++ b/arch/mn10300/include/asm/signal.h
+@@ -131,6 +131,7 @@ struct sigaction {
+ __sigrestore_t sa_restorer;
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/parisc/kernel/signal32.c b/arch/parisc/kernel/signal32.c
+index fb59852..32d43e7 100644
+--- a/arch/parisc/kernel/signal32.c
++++ b/arch/parisc/kernel/signal32.c
+@@ -68,7 +68,8 @@ put_sigset32(compat_sigset_t __user *up, sigset_t *set, size_t sz)
+ {
+ compat_sigset_t s;
+
+- if (sz != sizeof *set) panic("put_sigset32()");
++ if (sz != sizeof *set)
++ return -EINVAL;
+ sigset_64to32(&s, set);
+
+ return copy_to_user(up, &s, sizeof s);
+@@ -80,7 +81,8 @@ get_sigset32(compat_sigset_t __user *up, sigset_t *set, size_t sz)
+ compat_sigset_t s;
+ int r;
+
+- if (sz != sizeof *set) panic("put_sigset32()");
++ if (sz != sizeof *set)
++ return -EINVAL;
+
+ if ((r = copy_from_user(&s, up, sz)) == 0) {
+ sigset_32to64(set, &s);
+diff --git a/arch/powerpc/include/asm/signal.h b/arch/powerpc/include/asm/signal.h
+index 3eb13be..ec63a0a 100644
+--- a/arch/powerpc/include/asm/signal.h
++++ b/arch/powerpc/include/asm/signal.h
+@@ -109,6 +109,7 @@ struct sigaction {
+ __sigrestore_t sa_restorer;
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/s390/include/asm/signal.h b/arch/s390/include/asm/signal.h
+index cdf5cb2..c872626 100644
+--- a/arch/s390/include/asm/signal.h
++++ b/arch/s390/include/asm/signal.h
+@@ -131,6 +131,7 @@ struct sigaction {
+ void (*sa_restorer)(void);
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/arch/sparc/include/asm/signal.h b/arch/sparc/include/asm/signal.h
+index e49b828..4929431 100644
+--- a/arch/sparc/include/asm/signal.h
++++ b/arch/sparc/include/asm/signal.h
+@@ -191,6 +191,7 @@ struct __old_sigaction {
+ unsigned long sa_flags;
+ void (*sa_restorer)(void); /* not used by Linux/SPARC yet */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ typedef struct sigaltstack {
+ void __user *ss_sp;
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index aa889d6..ee0168d 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1430,7 +1430,7 @@ config ARCH_USES_PG_UNCACHED
+
+ config ARCH_RANDOM
+ def_bool y
+- prompt "x86 architectural random number generator" if EXPERT
++ prompt "x86 architectural random number generator" if EMBEDDED
+ ---help---
+ Enable the x86 architectural RDRAND instruction
+ (Intel Bull Mountain technology) to generate random numbers.
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index af6fd36..1cce9d2 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -130,6 +130,11 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
+ return (pmd_val(pmd) & PTE_PFN_MASK) >> PAGE_SHIFT;
+ }
+
++static inline unsigned long pud_pfn(pud_t pud)
++{
++ return (pud_val(pud) & PTE_PFN_MASK) >> PAGE_SHIFT;
++}
++
+ #define pte_page(pte) pfn_to_page(pte_pfn(pte))
+
+ static inline int pmd_large(pmd_t pte)
+diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
+index 0f0d908..e668d72 100644
+--- a/arch/x86/include/asm/ptrace.h
++++ b/arch/x86/include/asm/ptrace.h
+@@ -2,6 +2,7 @@
+ #define _ASM_X86_PTRACE_H
+
+ #include <linux/compiler.h> /* For __user */
++#include <linux/linkage.h> /* For asmregparm */
+ #include <asm/ptrace-abi.h>
+ #include <asm/processor-flags.h>
+
+@@ -142,8 +143,8 @@ extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs,
+ int error_code, int si_code);
+ void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
+
+-extern long syscall_trace_enter(struct pt_regs *);
+-extern void syscall_trace_leave(struct pt_regs *);
++extern asmregparm long syscall_trace_enter(struct pt_regs *);
++extern asmregparm void syscall_trace_leave(struct pt_regs *);
+
+ static inline unsigned long regs_return_value(struct pt_regs *regs)
+ {
+diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h
+index 598457c..6cbc795 100644
+--- a/arch/x86/include/asm/signal.h
++++ b/arch/x86/include/asm/signal.h
+@@ -125,6 +125,8 @@ typedef unsigned long sigset_t;
+ extern void do_notify_resume(struct pt_regs *, void *, __u32);
+ # endif /* __KERNEL__ */
+
++#define __ARCH_HAS_SA_RESTORER
++
+ #ifdef __i386__
+ # ifdef __KERNEL__
+ struct old_sigaction {
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 8928d97..d256bc3 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -4262,6 +4262,7 @@ static int bad_ioapic(unsigned long address)
+ void __init mp_register_ioapic(int id, u32 address, u32 gsi_base)
+ {
+ int idx = 0;
++ int entries;
+
+ if (bad_ioapic(address))
+ return;
+@@ -4280,10 +4281,14 @@ void __init mp_register_ioapic(int id, u32 address, u32 gsi_base)
+ * Build basic GSI lookup table to facilitate gsi->io_apic lookups
+ * and to prevent reprogramming of IOAPIC pins (PCI GSIs).
+ */
++ entries = io_apic_get_redir_entries(idx);
+ mp_gsi_routing[idx].gsi_base = gsi_base;
+- mp_gsi_routing[idx].gsi_end = gsi_base +
+- io_apic_get_redir_entries(idx);
++ mp_gsi_routing[idx].gsi_end = gsi_base + entries;
+
++ /*
++ * The number of IO-APIC IRQ registers (== #pins):
++ */
++ nr_ioapic_registers[idx] = entries + 1;
+ printk(KERN_INFO "IOAPIC[%d]: apic_id %d, version %d, address 0x%x, "
+ "GSI %d-%d\n", idx, mp_ioapics[idx].apicid,
+ mp_ioapics[idx].apicver, mp_ioapics[idx].apicaddr,
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 0f16a2b..28a7e4c8 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -431,6 +431,13 @@ static inline void mce_get_rip(struct mce *m, struct pt_regs *regs)
+ if (regs && (m->mcgstatus & (MCG_STATUS_RIPV|MCG_STATUS_EIPV))) {
+ m->ip = regs->ip;
+ m->cs = regs->cs;
++ /*
++ * When in VM86 mode make the cs look like ring 3
++ * always. This is a lie, but it's better than passing
++ * the additional vm86 bit around everywhere.
++ */
++ if (v8086_mode(regs))
++ m->cs |= 3;
+ } else {
+ m->ip = 0;
+ m->cs = 0;
+@@ -968,6 +975,7 @@ void do_machine_check(struct pt_regs *regs, long error_code)
+ */
+ add_taint(TAINT_MACHINE_CHECK);
+
++ mce_get_rip(&m, regs);
+ severity = mce_severity(&m, tolerant, NULL);
+
+ /*
+@@ -1006,7 +1014,6 @@ void do_machine_check(struct pt_regs *regs, long error_code)
+ if (severity == MCE_AO_SEVERITY && mce_usable_address(&m))
+ mce_ring_add(m.addr >> PAGE_SHIFT);
+
+- mce_get_rip(&m, regs);
+ mce_log(&m);
+
+ if (severity > worst) {
+diff --git a/arch/x86/kernel/efi.c b/arch/x86/kernel/efi.c
+index cdcfb12..a3e77af 100644
+--- a/arch/x86/kernel/efi.c
++++ b/arch/x86/kernel/efi.c
+@@ -459,9 +459,6 @@ void __init efi_init(void)
+ x86_platform.set_wallclock = efi_set_rtc_mmss;
+ #endif
+
+- /* Setup for EFI runtime service */
+- reboot_type = BOOT_EFI;
+-
+ #if EFI_DEBUG
+ print_efi_memmap();
+ #endif
+diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
+index 5eaeb5e..63a053b 100644
+--- a/arch/x86/kernel/msr.c
++++ b/arch/x86/kernel/msr.c
+@@ -176,6 +176,9 @@ static int msr_open(struct inode *inode, struct file *file)
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
+ int ret = 0;
+
++ if (!capable(CAP_SYS_RAWIO))
++ return -EPERM;
++
+ lock_kernel();
+ cpu = iminor(file->f_path.dentry->d_inode);
+
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 271fddf..cdee77e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -925,6 +925,12 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
+ /* ...but clean it before doing the actual write */
+ vcpu->arch.time_offset = data & ~(PAGE_MASK | 1);
+
++ /* Check that address+len does not cross page boundary */
++ if ((vcpu->arch.time_offset +
++ sizeof(struct pvclock_vcpu_time_info) - 1)
++ & PAGE_MASK)
++ break;
++
+ vcpu->arch.time_page =
+ gfn_to_page(vcpu->kvm, data >> PAGE_SHIFT);
+
+@@ -4713,6 +4719,9 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
+ int pending_vec, max_bits;
+ struct descriptor_table dt;
+
++ if (sregs->cr4 & X86_CR4_OSXSAVE)
++ return -EINVAL;
++
+ vcpu_load(vcpu);
+
+ dt.limit = sregs->idt.limit;
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 249ad57..df87450 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -376,10 +376,12 @@ static noinline int vmalloc_fault(unsigned long address)
+ if (pgd_none(*pgd_ref))
+ return -1;
+
+- if (pgd_none(*pgd))
++ if (pgd_none(*pgd)) {
+ set_pgd(pgd, *pgd_ref);
+- else
++ arch_flush_lazy_mmu_mode();
++ } else {
+ BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
++ }
+
+ /*
+ * Below here mismatches are bugs because these lower tables
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 7d095ad..ccbc61b 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -839,6 +839,9 @@ int kern_addr_valid(unsigned long addr)
+ if (pud_none(*pud))
+ return 0;
+
++ if (pud_large(*pud))
++ return pfn_valid(pud_pfn(*pud));
++
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd))
+ return 0;
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index d52f895..126a093 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -776,7 +776,16 @@ static void xen_write_cr4(unsigned long cr4)
+
+ native_write_cr4(cr4);
+ }
+-
++#ifdef CONFIG_X86_64
++static inline unsigned long xen_read_cr8(void)
++{
++ return 0;
++}
++static inline void xen_write_cr8(unsigned long val)
++{
++ BUG_ON(val);
++}
++#endif
+ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
+ {
+ int ret;
+@@ -942,6 +951,11 @@ static const struct pv_cpu_ops xen_cpu_ops __initdata = {
+ .read_cr4_safe = native_read_cr4_safe,
+ .write_cr4 = xen_write_cr4,
+
++#ifdef CONFIG_X86_64
++ .read_cr8 = xen_read_cr8,
++ .write_cr8 = xen_write_cr8,
++#endif
++
+ .wbinvd = native_wbinvd,
+
+ .read_msr = native_read_msr_safe,
+@@ -952,6 +966,8 @@ static const struct pv_cpu_ops xen_cpu_ops __initdata = {
+ .read_tsc = native_read_tsc,
+ .read_pmc = native_read_pmc,
+
++ .read_tscp = native_read_tscp,
++
+ .iret = xen_iret,
+ .irq_enable_sysexit = xen_sysexit,
+ #ifdef CONFIG_X86_64
+diff --git a/arch/x86/xen/xen-asm_32.S b/arch/x86/xen/xen-asm_32.S
+index 9a95a9c..d05bd11 100644
+--- a/arch/x86/xen/xen-asm_32.S
++++ b/arch/x86/xen/xen-asm_32.S
+@@ -88,11 +88,11 @@ ENTRY(xen_iret)
+ */
+ #ifdef CONFIG_SMP
+ GET_THREAD_INFO(%eax)
+- movl TI_cpu(%eax), %eax
+- movl __per_cpu_offset(,%eax,4), %eax
+- mov per_cpu__xen_vcpu(%eax), %eax
++ movl %ss:TI_cpu(%eax), %eax
++ movl %ss:__per_cpu_offset(,%eax,4), %eax
++ mov %ss:per_cpu__xen_vcpu(%eax), %eax
+ #else
+- movl per_cpu__xen_vcpu, %eax
++ movl %ss:per_cpu__xen_vcpu, %eax
+ #endif
+
+ /* check IF state we're restoring */
+@@ -105,11 +105,11 @@ ENTRY(xen_iret)
+ * resuming the code, so we don't have to be worried about
+ * being preempted to another CPU.
+ */
+- setz XEN_vcpu_info_mask(%eax)
++ setz %ss:XEN_vcpu_info_mask(%eax)
+ xen_iret_start_crit:
+
+ /* check for unmasked and pending */
+- cmpw $0x0001, XEN_vcpu_info_pending(%eax)
++ cmpw $0x0001, %ss:XEN_vcpu_info_pending(%eax)
+
+ /*
+ * If there's something pending, mask events again so we can
+@@ -117,7 +117,7 @@ xen_iret_start_crit:
+ * touch XEN_vcpu_info_mask.
+ */
+ jne 1f
+- movb $1, XEN_vcpu_info_mask(%eax)
++ movb $1, %ss:XEN_vcpu_info_mask(%eax)
+
+ 1: popl %eax
+
+diff --git a/arch/xtensa/include/asm/signal.h b/arch/xtensa/include/asm/signal.h
+index 633ba73..75edf8a 100644
+--- a/arch/xtensa/include/asm/signal.h
++++ b/arch/xtensa/include/asm/signal.h
+@@ -133,6 +133,7 @@ struct sigaction {
+ void (*sa_restorer)(void);
+ sigset_t sa_mask; /* mask last for extensibility */
+ };
++#define __ARCH_HAS_SA_RESTORER
+
+ struct k_sigaction {
+ struct sigaction sa;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index cffd737..4058f46 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -865,6 +865,9 @@ struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
+ {
+ struct request *rq;
+
++ if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
++ return NULL;
++
+ BUG_ON(rw != READ && rw != WRITE);
+
+ spin_lock_irq(q->queue_lock);
+@@ -1149,7 +1152,7 @@ void init_request_from_bio(struct request *req, struct bio *bio)
+ */
+ static inline bool queue_should_plug(struct request_queue *q)
+ {
+- return !(blk_queue_nonrot(q) && blk_queue_queuing(q));
++ return !(blk_queue_nonrot(q) && blk_queue_tagged(q));
+ }
+
+ static int __make_request(struct request_queue *q, struct bio *bio)
+@@ -1861,15 +1864,8 @@ void blk_dequeue_request(struct request *rq)
+ * and to it is freed is accounted as io that is in progress at
+ * the driver side.
+ */
+- if (blk_account_rq(rq)) {
++ if (blk_account_rq(rq))
+ q->in_flight[rq_is_sync(rq)]++;
+- /*
+- * Mark this device as supporting hardware queuing, if
+- * we have more IOs in flight than 4.
+- */
+- if (!blk_queue_queuing(q) && queue_in_flight(q) > 4)
+- set_bit(QUEUE_FLAG_CQ, &q->queue_flags);
+- }
+ }
+
+ /**
+diff --git a/block/blk-exec.c b/block/blk-exec.c
+index 49557e9..85bd7b4 100644
+--- a/block/blk-exec.c
++++ b/block/blk-exec.c
+@@ -50,6 +50,13 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
+ {
+ int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
+
++ if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
++ rq->errors = -ENXIO;
++ if (rq->end_io)
++ rq->end_io(rq, rq->errors);
++ return;
++ }
++
+ rq->rq_disk = bd_disk;
+ rq->end_io = done;
+ WARN_ON(irqs_disabled());
+diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
+index 2be0a97..123eb17 100644
+--- a/block/scsi_ioctl.c
++++ b/block/scsi_ioctl.c
+@@ -720,11 +720,14 @@ int scsi_verify_blk_ioctl(struct block_device *bd, unsigned int cmd)
+ break;
+ }
+
++ if (capable(CAP_SYS_RAWIO))
++ return 0;
++
+ /* In particular, rule out all resets and host-specific ioctls. */
+ printk_ratelimited(KERN_WARNING
+ "%s: sending ioctl %x to a partition!\n", current->comm, cmd);
+
+- return capable(CAP_SYS_RAWIO) ? 0 : -ENOTTY;
++ return -ENOTTY;
+ }
+ EXPORT_SYMBOL(scsi_verify_blk_ioctl);
+
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index 3533582..9e1bf69 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -116,13 +116,18 @@ static void cryptd_queue_worker(struct work_struct *work)
+ struct crypto_async_request *req, *backlog;
+
+ cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
+- /* Only handle one request at a time to avoid hogging crypto
+- * workqueue. preempt_disable/enable is used to prevent
+- * being preempted by cryptd_enqueue_request() */
++ /*
++ * Only handle one request at a time to avoid hogging crypto workqueue.
++ * preempt_disable/enable is used to prevent being preempted by
++ * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
++ * cryptd_enqueue_request() being accessed from software interrupts.
++ */
++ local_bh_disable();
+ preempt_disable();
+ backlog = crypto_get_backlog(&cpu_queue->queue);
+ req = crypto_dequeue_request(&cpu_queue->queue);
+ preempt_enable();
++ local_bh_enable();
+
+ if (!req)
+ return;
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index a6ad608..70e9ed1 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -1071,6 +1071,9 @@ static int acpi_processor_setup_cpuidle(struct acpi_processor *pr)
+ return -EINVAL;
+ }
+
++ if (!dev)
++ return -EINVAL;
++
+ dev->cpu = pr->id;
+ for (i = 0; i < CPUIDLE_STATE_MAX; i++) {
+ dev->states[i].name[0] = '\0';
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 553edcc..57e895a1 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -338,7 +338,8 @@ ata_scsi_activity_show(struct device *dev, struct device_attribute *attr,
+ struct ata_port *ap = ata_shost_to_port(sdev->host);
+ struct ata_device *atadev = ata_scsi_find_dev(ap, sdev);
+
+- if (ap->ops->sw_activity_show && (ap->flags & ATA_FLAG_SW_ACTIVITY))
++ if (atadev && ap->ops->sw_activity_show &&
++ (ap->flags & ATA_FLAG_SW_ACTIVITY))
+ return ap->ops->sw_activity_show(atadev, buf);
+ return -EINVAL;
+ }
+@@ -353,7 +354,8 @@ ata_scsi_activity_store(struct device *dev, struct device_attribute *attr,
+ enum sw_activity val;
+ int rc;
+
+- if (ap->ops->sw_activity_store && (ap->flags & ATA_FLAG_SW_ACTIVITY)) {
++ if (atadev && ap->ops->sw_activity_store &&
++ (ap->flags & ATA_FLAG_SW_ACTIVITY)) {
+ val = simple_strtoul(buf, NULL, 0);
+ switch (val) {
+ case OFF: case BLINK_ON: case BLINK_OFF:
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 63c143e..6f1ba10 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -289,7 +289,7 @@ int bus_for_each_dev(struct bus_type *bus, struct device *start,
+ struct device *dev;
+ int error = 0;
+
+- if (!bus)
++ if (!bus || !bus->p)
+ return -EINVAL;
+
+ klist_iter_init_node(&bus->p->klist_devices, &i,
+@@ -323,7 +323,7 @@ struct device *bus_find_device(struct bus_type *bus,
+ struct klist_iter i;
+ struct device *dev;
+
+- if (!bus)
++ if (!bus || !bus->p)
+ return NULL;
+
+ klist_iter_init_node(&bus->p->klist_devices, &i,
+diff --git a/drivers/char/ipmi/ipmi_bt_sm.c b/drivers/char/ipmi/ipmi_bt_sm.c
+index 7b98c06..a65a574 100644
+--- a/drivers/char/ipmi/ipmi_bt_sm.c
++++ b/drivers/char/ipmi/ipmi_bt_sm.c
+@@ -95,9 +95,9 @@ struct si_sm_data {
+ enum bt_states state;
+ unsigned char seq; /* BT sequence number */
+ struct si_sm_io *io;
+- unsigned char write_data[IPMI_MAX_MSG_LENGTH];
++ unsigned char write_data[IPMI_MAX_MSG_LENGTH + 2]; /* +2 for memcpy */
+ int write_count;
+- unsigned char read_data[IPMI_MAX_MSG_LENGTH];
++ unsigned char read_data[IPMI_MAX_MSG_LENGTH + 2]; /* +2 for memcpy */
+ int read_count;
+ int truncated;
+ long timeout; /* microseconds countdown */
+diff --git a/drivers/firmware/pcdp.c b/drivers/firmware/pcdp.c
+index a330492..51e0e2d 100644
+--- a/drivers/firmware/pcdp.c
++++ b/drivers/firmware/pcdp.c
+@@ -95,7 +95,7 @@ efi_setup_pcdp_console(char *cmdline)
+ if (efi.hcdp == EFI_INVALID_TABLE_ADDR)
+ return -ENODEV;
+
+- pcdp = early_ioremap(efi.hcdp, 4096);
++ pcdp = ioremap(efi.hcdp, 4096);
+ printk(KERN_INFO "PCDP: v%d at 0x%lx\n", pcdp->rev, efi.hcdp);
+
+ if (strstr(cmdline, "console=hcdp")) {
+@@ -131,6 +131,6 @@ efi_setup_pcdp_console(char *cmdline)
+ }
+
+ out:
+- early_iounmap(pcdp, 4096);
++ iounmap(pcdp);
+ return rc;
+ }
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index b4b2257..f6a23ec 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -157,7 +157,7 @@ static int ipoib_stop(struct net_device *dev)
+
+ netif_stop_queue(dev);
+
+- ipoib_ib_dev_down(dev, 0);
++ ipoib_ib_dev_down(dev, 1);
+ ipoib_ib_dev_stop(dev, 0);
+
+ if (!test_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags)) {
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index 8763c1e..bd656a7 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -188,7 +188,9 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
+
+ mcast->mcmember = *mcmember;
+
+- /* Set the cached Q_Key before we attach if it's the broadcast group */
++ /* Set the multicast MTU and cached Q_Key before we attach if it's
++ * the broadcast group.
++ */
+ if (!memcmp(mcast->mcmember.mgid.raw, priv->dev->broadcast + 4,
+ sizeof (union ib_gid))) {
+ spin_lock_irq(&priv->lock);
+@@ -196,10 +198,17 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
+ spin_unlock_irq(&priv->lock);
+ return -EAGAIN;
+ }
++ priv->mcast_mtu = IPOIB_UD_MTU(ib_mtu_enum_to_int(priv->broadcast->mcmember.mtu));
+ priv->qkey = be32_to_cpu(priv->broadcast->mcmember.qkey);
+ spin_unlock_irq(&priv->lock);
+ priv->tx_wr.wr.ud.remote_qkey = priv->qkey;
+ set_qkey = 1;
++
++ if (!ipoib_cm_admin_enabled(dev)) {
++ rtnl_lock();
++ dev_set_mtu(dev, min(priv->mcast_mtu, priv->admin_mtu));
++ rtnl_unlock();
++ }
+ }
+
+ if (!test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
+@@ -588,14 +597,6 @@ void ipoib_mcast_join_task(struct work_struct *work)
+ return;
+ }
+
+- priv->mcast_mtu = IPOIB_UD_MTU(ib_mtu_enum_to_int(priv->broadcast->mcmember.mtu));
+-
+- if (!ipoib_cm_admin_enabled(dev)) {
+- rtnl_lock();
+- dev_set_mtu(dev, min(priv->mcast_mtu, priv->admin_mtu));
+- rtnl_unlock();
+- }
+-
+ ipoib_dbg_mcast(priv, "successfully joined all multicast groups\n");
+
+ clear_bit(IPOIB_MCAST_RUN, &priv->flags);
+diff --git a/drivers/isdn/isdnloop/isdnloop.c b/drivers/isdn/isdnloop/isdnloop.c
+index a335c85..22446f7 100644
+--- a/drivers/isdn/isdnloop/isdnloop.c
++++ b/drivers/isdn/isdnloop/isdnloop.c
+@@ -15,7 +15,6 @@
+ #include <linux/sched.h>
+ #include "isdnloop.h"
+
+-static char *revision = "$Revision: 1.11.6.7 $";
+ static char *isdnloop_id = "loop0";
+
+ MODULE_DESCRIPTION("ISDN4Linux: Pseudo Driver that simulates an ISDN card");
+@@ -1493,17 +1492,6 @@ isdnloop_addcard(char *id1)
+ static int __init
+ isdnloop_init(void)
+ {
+- char *p;
+- char rev[10];
+-
+- if ((p = strchr(revision, ':'))) {
+- strcpy(rev, p + 1);
+- p = strchr(rev, '$');
+- *p = 0;
+- } else
+- strcpy(rev, " ??? ");
+- printk(KERN_NOTICE "isdnloop-ISDN-driver Rev%s\n", rev);
+-
+ if (isdnloop_id)
+ return (isdnloop_addcard(isdnloop_id));
+
+diff --git a/drivers/net/bonding/bonding.h b/drivers/net/bonding/bonding.h
+index 6824771..5d127fc 100644
+--- a/drivers/net/bonding/bonding.h
++++ b/drivers/net/bonding/bonding.h
+@@ -236,11 +236,11 @@ static inline struct slave *bond_get_slave_by_dev(struct bonding *bond, struct n
+
+ bond_for_each_slave(bond, slave, i) {
+ if (slave->dev == slave_dev) {
+- break;
++ return slave;
+ }
+ }
+
+- return slave;
++ return 0;
+ }
+
+ static inline struct bonding *bond_get_bond_by_slave(struct slave *slave)
+diff --git a/drivers/net/r8169.c b/drivers/net/r8169.c
+index 3ebe50c..7ddbb8e 100644
+diff --git a/drivers/net/tg3.c b/drivers/net/tg3.c
+index fd6622c..89aa69c 100644
+diff --git a/drivers/net/wireless/b43legacy/main.c b/drivers/net/wireless/b43legacy/main.c
+index c3968fad..fc0fc85 100644
+--- a/drivers/net/wireless/b43legacy/main.c
++++ b/drivers/net/wireless/b43legacy/main.c
+@@ -3870,6 +3870,8 @@ static void b43legacy_remove(struct ssb_device *dev)
+ cancel_work_sync(&wldev->restart_work);
+
+ B43legacy_WARN_ON(!wl);
++ if (!wldev->fw.ucode)
++ return; /* NULL if fw never loaded */
+ if (wl->current_dev == wldev)
+ ieee80211_unregister_hw(wl->hw);
+
+diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
+index 5c8d763..1ab55d6 100644
+--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
++++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
+@@ -1156,6 +1156,9 @@ int bnx2i_send_fw_iscsi_init_msg(struct bnx2i_hba *hba)
+ int rc = 0;
+ u64 mask64;
+
++ memset(&iscsi_init, 0x00, sizeof(struct iscsi_kwqe_init1));
++ memset(&iscsi_init2, 0x00, sizeof(struct iscsi_kwqe_init2));
++
+ bnx2i_adjust_qp_size(hba);
+
+ iscsi_init.flags =
+diff --git a/drivers/scsi/mpt2sas/mpt2sas_ctl.c b/drivers/scsi/mpt2sas/mpt2sas_ctl.c
+index 7767b8f..48ae81b 100644
+--- a/drivers/scsi/mpt2sas/mpt2sas_ctl.c
++++ b/drivers/scsi/mpt2sas/mpt2sas_ctl.c
+@@ -750,8 +750,11 @@ _ctl_do_mpt_command(struct MPT2SAS_ADAPTER *ioc,
+ (u32)mpt2sas_base_get_sense_buffer_dma(ioc, smid);
+ priv_sense = mpt2sas_base_get_sense_buffer(ioc, smid);
+ memset(priv_sense, 0, SCSI_SENSE_BUFFERSIZE);
+- mpt2sas_base_put_smid_scsi_io(ioc, smid,
+- le16_to_cpu(mpi_request->FunctionDependent1));
++ if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST)
++ mpt2sas_base_put_smid_scsi_io(ioc, smid,
++ le16_to_cpu(mpi_request->FunctionDependent1));
++ else
++ mpt2sas_base_put_smid_default(ioc, smid);
+ break;
+ }
+ case MPI2_FUNCTION_SCSI_TASK_MGMT:
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index e28f9b0..933f1c5 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -215,6 +215,8 @@ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
+ int ret = DRIVER_ERROR << 24;
+
+ req = blk_get_request(sdev->request_queue, write, __GFP_WAIT);
++ if (!req)
++ return ret;
+
+ if (bufflen && blk_rq_map_kern(sdev->request_queue, req,
+ buffer, bufflen, __GFP_WAIT))
+diff --git a/drivers/serial/8250.c b/drivers/serial/8250.c
+index 6a451e8..12e1e9e 100644
+--- a/drivers/serial/8250.c
++++ b/drivers/serial/8250.c
+@@ -81,7 +81,7 @@ static unsigned int skip_txen_test; /* force skip of txen test at init time */
+ #define DEBUG_INTR(fmt...) do { } while (0)
+ #endif
+
+-#define PASS_LIMIT 256
++#define PASS_LIMIT 512
+
+ #define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
+
+diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
+index 908f25a..90810e8 100644
+--- a/drivers/staging/comedi/comedi_fops.c
++++ b/drivers/staging/comedi/comedi_fops.c
+@@ -809,7 +809,7 @@ static int parse_insn(struct comedi_device *dev, struct comedi_insn *insn,
+ ret = -EAGAIN;
+ break;
+ }
+- ret = s->async->inttrig(dev, s, insn->data[0]);
++ ret = s->async->inttrig(dev, s, data[0]);
+ if (ret >= 0)
+ ret = 1;
+ break;
+@@ -1035,7 +1035,6 @@ static int do_cmd_ioctl(struct comedi_device *dev, void *arg, void *file)
+ goto cleanup;
+ }
+
+- kfree(async->cmd.chanlist);
+ async->cmd = user_cmd;
+ async->cmd.data = NULL;
+ /* load channel/gain list */
+@@ -1499,7 +1498,7 @@ static unsigned int comedi_poll(struct file *file, poll_table * wait)
+
+ mask = 0;
+ read_subdev = comedi_get_read_subdevice(dev_file_info);
+- if (read_subdev) {
++ if (read_subdev && read_subdev->async) {
+ poll_wait(file, &read_subdev->async->wait_head, wait);
+ if (!read_subdev->busy
+ || comedi_buf_read_n_available(read_subdev->async) > 0
+@@ -1509,7 +1508,7 @@ static unsigned int comedi_poll(struct file *file, poll_table * wait)
+ }
+ }
+ write_subdev = comedi_get_write_subdevice(dev_file_info);
+- if (write_subdev) {
++ if (write_subdev && write_subdev->async) {
+ poll_wait(file, &write_subdev->async->wait_head, wait);
+ comedi_buf_write_alloc(write_subdev->async,
+ write_subdev->async->prealloc_bufsz);
+@@ -1551,7 +1550,7 @@ static ssize_t comedi_write(struct file *file, const char *buf, size_t nbytes,
+ }
+
+ s = comedi_get_write_subdevice(dev_file_info);
+- if (s == NULL) {
++ if (s == NULL || s->async == NULL) {
+ retval = -EIO;
+ goto done;
+ }
+@@ -1659,7 +1658,7 @@ static ssize_t comedi_read(struct file *file, char *buf, size_t nbytes,
+ }
+
+ s = comedi_get_read_subdevice(dev_file_info);
+- if (s == NULL) {
++ if (s == NULL || s->async == NULL) {
+ retval = -EIO;
+ goto done;
+ }
+@@ -1759,6 +1758,8 @@ void do_become_nonbusy(struct comedi_device *dev, struct comedi_subdevice *s)
+ if (async) {
+ comedi_reset_async_buf(async);
+ async->inttrig = NULL;
++ kfree(async->cmd.chanlist);
++ async->cmd.chanlist = NULL;
+ } else {
+ printk(KERN_ERR
+ "BUG: (?) do_become_nonbusy called with async=0\n");
+diff --git a/drivers/staging/comedi/drivers/comedi_test.c b/drivers/staging/comedi/drivers/comedi_test.c
+index ef83a1a..7a1e2e8 100644
+--- a/drivers/staging/comedi/drivers/comedi_test.c
++++ b/drivers/staging/comedi/drivers/comedi_test.c
+@@ -450,7 +450,7 @@ static int waveform_ai_cancel(struct comedi_device *dev,
+ struct comedi_subdevice *s)
+ {
+ devpriv->timer_running = 0;
+- del_timer(&devpriv->timer);
++ del_timer_sync(&devpriv->timer);
+ return 0;
+ }
+
+diff --git a/drivers/staging/comedi/drivers/das08.c b/drivers/staging/comedi/drivers/das08.c
+index f425833..c05cb4b 100644
+--- a/drivers/staging/comedi/drivers/das08.c
++++ b/drivers/staging/comedi/drivers/das08.c
+@@ -652,7 +652,7 @@ static int das08jr_ao_winsn(struct comedi_device *dev,
+ int chan;
+
+ lsb = data[0] & 0xff;
+- msb = (data[0] >> 8) & 0xf;
++ msb = (data[0] >> 8) & 0xff;
+
+ chan = CR_CHAN(insn->chanspec);
+
+diff --git a/drivers/staging/comedi/drivers/jr3_pci.c b/drivers/staging/comedi/drivers/jr3_pci.c
+index 1d6385a..ae6f40c 100644
+--- a/drivers/staging/comedi/drivers/jr3_pci.c
++++ b/drivers/staging/comedi/drivers/jr3_pci.c
+@@ -917,7 +917,7 @@ static int jr3_pci_attach(struct comedi_device *dev,
+ }
+
+ /* Reset DSP card */
+- devpriv->iobase->channel[0].reset = 0;
++ writel(0, &devpriv->iobase->channel[0].reset);
+
+ result = comedi_load_firmware(dev, "jr3pci.idm", jr3_download_firmware);
+ printk("Firmare load %d\n", result);
+diff --git a/drivers/staging/comedi/drivers/ni_labpc.c b/drivers/staging/comedi/drivers/ni_labpc.c
+index 4ac745a..76ca73a 100644
+--- a/drivers/staging/comedi/drivers/ni_labpc.c
++++ b/drivers/staging/comedi/drivers/ni_labpc.c
+@@ -1178,7 +1178,9 @@ static int labpc_ai_cmd(struct comedi_device *dev, struct comedi_subdevice *s)
+ else
+ channel = CR_CHAN(cmd->chanlist[0]);
+ /* munge channel bits for differential / scan disabled mode */
+- if (labpc_ai_scan_mode(cmd) != MODE_SINGLE_CHAN && aref == AREF_DIFF)
++ if ((labpc_ai_scan_mode(cmd) == MODE_SINGLE_CHAN ||
++ labpc_ai_scan_mode(cmd) == MODE_SINGLE_CHAN_INTERVAL) &&
++ aref == AREF_DIFF)
+ channel *= 2;
+ devpriv->command1_bits |= ADC_CHAN_BITS(channel);
+ devpriv->command1_bits |= thisboard->ai_range_code[range];
+@@ -1193,21 +1195,6 @@ static int labpc_ai_cmd(struct comedi_device *dev, struct comedi_subdevice *s)
+ devpriv->write_byte(devpriv->command1_bits,
+ dev->iobase + COMMAND1_REG);
+ }
+- /* setup any external triggering/pacing (command4 register) */
+- devpriv->command4_bits = 0;
+- if (cmd->convert_src != TRIG_EXT)
+- devpriv->command4_bits |= EXT_CONVERT_DISABLE_BIT;
+- /* XXX should discard first scan when using interval scanning
+- * since manual says it is not synced with scan clock */
+- if (labpc_use_continuous_mode(cmd) == 0) {
+- devpriv->command4_bits |= INTERVAL_SCAN_EN_BIT;
+- if (cmd->scan_begin_src == TRIG_EXT)
+- devpriv->command4_bits |= EXT_SCAN_EN_BIT;
+- }
+- /* single-ended/differential */
+- if (aref == AREF_DIFF)
+- devpriv->command4_bits |= ADC_DIFF_BIT;
+- devpriv->write_byte(devpriv->command4_bits, dev->iobase + COMMAND4_REG);
+
+ devpriv->write_byte(cmd->chanlist_len,
+ dev->iobase + INTERVAL_COUNT_REG);
+@@ -1285,6 +1272,22 @@ static int labpc_ai_cmd(struct comedi_device *dev, struct comedi_subdevice *s)
+ devpriv->command3_bits &= ~ADC_FNE_INTR_EN_BIT;
+ devpriv->write_byte(devpriv->command3_bits, dev->iobase + COMMAND3_REG);
+
++ /* setup any external triggering/pacing (command4 register) */
++ devpriv->command4_bits = 0;
++ if (cmd->convert_src != TRIG_EXT)
++ devpriv->command4_bits |= EXT_CONVERT_DISABLE_BIT;
++ /* XXX should discard first scan when using interval scanning
++ * since manual says it is not synced with scan clock */
++ if (labpc_use_continuous_mode(cmd) == 0) {
++ devpriv->command4_bits |= INTERVAL_SCAN_EN_BIT;
++ if (cmd->scan_begin_src == TRIG_EXT)
++ devpriv->command4_bits |= EXT_SCAN_EN_BIT;
++ }
++ /* single-ended/differential */
++ if (aref == AREF_DIFF)
++ devpriv->command4_bits |= ADC_DIFF_BIT;
++ devpriv->write_byte(devpriv->command4_bits, dev->iobase + COMMAND4_REG);
++
+ /* startup aquisition */
+
+ /* command2 reg */
+diff --git a/drivers/staging/comedi/drivers/s626.c b/drivers/staging/comedi/drivers/s626.c
+index 80d2787..7a7c29f 100644
+--- a/drivers/staging/comedi/drivers/s626.c
++++ b/drivers/staging/comedi/drivers/s626.c
+@@ -2330,7 +2330,7 @@ static int s626_enc_insn_config(struct comedi_device *dev,
+ /* (data==NULL) ? (Preloadvalue=0) : (Preloadvalue=data[0]); */
+
+ k->SetMode(dev, k, Setup, TRUE);
+- Preload(dev, k, *(insn->data));
++ Preload(dev, k, data[0]);
+ k->PulseIndex(dev, k);
+ SetLatchSource(dev, k, valueSrclatch);
+ k->SetEnable(dev, k, (uint16_t) (enab != 0));
+diff --git a/drivers/staging/vt6656/rf.c b/drivers/staging/vt6656/rf.c
+index 405c4f7..9d059de 100644
+--- a/drivers/staging/vt6656/rf.c
++++ b/drivers/staging/vt6656/rf.c
+@@ -769,6 +769,9 @@ BYTE byPwr = pDevice->byCCKPwr;
+ return TRUE;
+ }
+
++ if (uCH == 0)
++ return -EINVAL;
++
+ switch (uRATE) {
+ case RATE_1M:
+ case RATE_2M:
+diff --git a/drivers/telephony/ixj.c b/drivers/telephony/ixj.c
+index 40de151..56eb6cc 100644
+--- a/drivers/telephony/ixj.c
++++ b/drivers/telephony/ixj.c
+@@ -3190,12 +3190,12 @@ static void ixj_write_cid(IXJ *j)
+
+ ixj_fsk_alloc(j);
+
+- strcpy(sdmf1, j->cid_send.month);
+- strcat(sdmf1, j->cid_send.day);
+- strcat(sdmf1, j->cid_send.hour);
+- strcat(sdmf1, j->cid_send.min);
+- strcpy(sdmf2, j->cid_send.number);
+- strcpy(sdmf3, j->cid_send.name);
++ strlcpy(sdmf1, j->cid_send.month, sizeof(sdmf1));
++ strlcat(sdmf1, j->cid_send.day, sizeof(sdmf1));
++ strlcat(sdmf1, j->cid_send.hour, sizeof(sdmf1));
++ strlcat(sdmf1, j->cid_send.min, sizeof(sdmf1));
++ strlcpy(sdmf2, j->cid_send.number, sizeof(sdmf2));
++ strlcpy(sdmf3, j->cid_send.name, sizeof(sdmf3));
+
+ len1 = strlen(sdmf1);
+ len2 = strlen(sdmf2);
+@@ -3340,12 +3340,12 @@ static void ixj_write_cidcw(IXJ *j)
+ ixj_pre_cid(j);
+ }
+ j->flags.cidcw_ack = 0;
+- strcpy(sdmf1, j->cid_send.month);
+- strcat(sdmf1, j->cid_send.day);
+- strcat(sdmf1, j->cid_send.hour);
+- strcat(sdmf1, j->cid_send.min);
+- strcpy(sdmf2, j->cid_send.number);
+- strcpy(sdmf3, j->cid_send.name);
++ strlcpy(sdmf1, j->cid_send.month, sizeof(sdmf1));
++ strlcat(sdmf1, j->cid_send.day, sizeof(sdmf1));
++ strlcat(sdmf1, j->cid_send.hour, sizeof(sdmf1));
++ strlcat(sdmf1, j->cid_send.min, sizeof(sdmf1));
++ strlcpy(sdmf2, j->cid_send.number, sizeof(sdmf2));
++ strlcpy(sdmf3, j->cid_send.name, sizeof(sdmf3));
+
+ len1 = strlen(sdmf1);
+ len2 = strlen(sdmf2);
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 37f2899..01ae519 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -52,6 +52,7 @@ MODULE_DEVICE_TABLE (usb, wdm_ids);
+ #define WDM_READ 4
+ #define WDM_INT_STALL 5
+ #define WDM_POLL_RUNNING 6
++#define WDM_OVERFLOW 10
+
+
+ #define WDM_MAX 16
+@@ -115,6 +116,7 @@ static void wdm_in_callback(struct urb *urb)
+ {
+ struct wdm_device *desc = urb->context;
+ int status = urb->status;
++ int length = urb->actual_length;
+
+ spin_lock(&desc->iuspin);
+
+@@ -144,9 +146,17 @@ static void wdm_in_callback(struct urb *urb)
+ }
+
+ desc->rerr = status;
+- desc->reslength = urb->actual_length;
+- memmove(desc->ubuf + desc->length, desc->inbuf, desc->reslength);
+- desc->length += desc->reslength;
++ if (length + desc->length > desc->wMaxCommand) {
++ /* The buffer would overflow */
++ set_bit(WDM_OVERFLOW, &desc->flags);
++ } else {
++ /* we may already be in overflow */
++ if (!test_bit(WDM_OVERFLOW, &desc->flags)) {
++ memmove(desc->ubuf + desc->length, desc->inbuf, length);
++ desc->length += length;
++ desc->reslength = length;
++ }
++ }
+ wake_up(&desc->wait);
+
+ set_bit(WDM_READ, &desc->flags);
+@@ -398,6 +408,11 @@ retry:
+ rv = -ENODEV;
+ goto err;
+ }
++ if (test_bit(WDM_OVERFLOW, &desc->flags)) {
++ clear_bit(WDM_OVERFLOW, &desc->flags);
++ rv = -ENOBUFS;
++ goto err;
++ }
+ i++;
+ if (file->f_flags & O_NONBLOCK) {
+ if (!test_bit(WDM_READ, &desc->flags)) {
+@@ -440,6 +455,7 @@ retry:
+ spin_unlock_irq(&desc->iuspin);
+ goto retry;
+ }
++
+ if (!desc->reslength) { /* zero length read */
+ dev_dbg(&desc->intf->dev, "%s: zero length - clearing WDM_READ\n", __func__);
+ clear_bit(WDM_READ, &desc->flags);
+@@ -844,6 +860,7 @@ static int wdm_post_reset(struct usb_interface *intf)
+ struct wdm_device *desc = usb_get_intfdata(intf);
+ int rv;
+
++ clear_bit(WDM_OVERFLOW, &desc->flags);
+ rv = recover_from_urb_loss(desc);
+ mutex_unlock(&desc->plock);
+ return 0;
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 7b2e99c..8d17f780 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -84,7 +84,8 @@ static const char hcd_name [] = "ehci_hcd";
+ #define EHCI_IAA_MSECS 10 /* arbitrary */
+ #define EHCI_IO_JIFFIES (HZ/10) /* io watchdog > irq_thresh */
+ #define EHCI_ASYNC_JIFFIES (HZ/20) /* async idle timeout */
+-#define EHCI_SHRINK_FRAMES 5 /* async qh unlink delay */
++#define EHCI_SHRINK_JIFFIES (DIV_ROUND_UP(HZ, 200) + 1)
++ /* 200-ms async qh unlink delay */
+
+ /* Initial IRQ latency: faster than hw default */
+ static int log2_irq_thresh = 0; // 0 to 6
+@@ -139,10 +140,7 @@ timer_action(struct ehci_hcd *ehci, enum ehci_timer_action action)
+ break;
+ /* case TIMER_ASYNC_SHRINK: */
+ default:
+- /* add a jiffie since we synch against the
+- * 8 KHz uframe counter.
+- */
+- t = DIV_ROUND_UP(EHCI_SHRINK_FRAMES * HZ, 1000) + 1;
++ t = EHCI_SHRINK_JIFFIES;
+ break;
+ }
+ mod_timer(&ehci->watchdog, t + jiffies);
+diff --git a/drivers/usb/host/ehci-q.c b/drivers/usb/host/ehci-q.c
+index 0ee5b4b..3b8fa18 100644
+--- a/drivers/usb/host/ehci-q.c
++++ b/drivers/usb/host/ehci-q.c
+@@ -1204,6 +1204,8 @@ static void start_unlink_async (struct ehci_hcd *ehci, struct ehci_qh *qh)
+
+ prev->hw->hw_next = qh->hw->hw_next;
+ prev->qh_next = qh->qh_next;
++ if (ehci->qh_scan_next == qh)
++ ehci->qh_scan_next = qh->qh_next.qh;
+ wmb ();
+
+ /* If the controller isn't running, we don't have to wait for it */
+@@ -1229,53 +1231,49 @@ static void scan_async (struct ehci_hcd *ehci)
+ struct ehci_qh *qh;
+ enum ehci_timer_action action = TIMER_IO_WATCHDOG;
+
+- ehci->stamp = ehci_readl(ehci, &ehci->regs->frame_index);
+ timer_action_done (ehci, TIMER_ASYNC_SHRINK);
+-rescan:
+ stopped = !HC_IS_RUNNING(ehci_to_hcd(ehci)->state);
+- qh = ehci->async->qh_next.qh;
+- if (likely (qh != NULL)) {
+- do {
+- /* clean any finished work for this qh */
+- if (!list_empty(&qh->qtd_list) && (stopped ||
+- qh->stamp != ehci->stamp)) {
+- int temp;
+-
+- /* unlinks could happen here; completion
+- * reporting drops the lock. rescan using
+- * the latest schedule, but don't rescan
+- * qhs we already finished (no looping)
+- * unless the controller is stopped.
+- */
+- qh = qh_get (qh);
+- qh->stamp = ehci->stamp;
+- temp = qh_completions (ehci, qh);
+- if (qh->needs_rescan)
+- unlink_async(ehci, qh);
+- qh_put (qh);
+- if (temp != 0) {
+- goto rescan;
+- }
+- }
+
+- /* unlink idle entries, reducing DMA usage as well
+- * as HCD schedule-scanning costs. delay for any qh
+- * we just scanned, there's a not-unusual case that it
+- * doesn't stay idle for long.
+- * (plus, avoids some kind of re-activation race.)
++ ehci->qh_scan_next = ehci->async->qh_next.qh;
++ while (ehci->qh_scan_next) {
++ qh = ehci->qh_scan_next;
++ ehci->qh_scan_next = qh->qh_next.qh;
++ rescan:
++ /* clean any finished work for this qh */
++ if (!list_empty(&qh->qtd_list)) {
++ int temp;
++
++ /*
++ * Unlinks could happen here; completion reporting
++ * drops the lock. That's why ehci->qh_scan_next
++ * always holds the next qh to scan; if the next qh
++ * gets unlinked then ehci->qh_scan_next is adjusted
++ * in start_unlink_async().
+ */
+- if (list_empty(&qh->qtd_list)
+- && qh->qh_state == QH_STATE_LINKED) {
+- if (!ehci->reclaim && (stopped ||
+- ((ehci->stamp - qh->stamp) & 0x1fff)
+- >= EHCI_SHRINK_FRAMES * 8))
+- start_unlink_async(ehci, qh);
+- else
+- action = TIMER_ASYNC_SHRINK;
+- }
++ qh = qh_get(qh);
++ temp = qh_completions(ehci, qh);
++ if (qh->needs_rescan)
++ unlink_async(ehci, qh);
++ qh->unlink_time = jiffies + EHCI_SHRINK_JIFFIES;
++ qh_put(qh);
++ if (temp != 0)
++ goto rescan;
++ }
+
+- qh = qh->qh_next.qh;
+- } while (qh);
++ /* unlink idle entries, reducing DMA usage as well
++ * as HCD schedule-scanning costs. delay for any qh
++ * we just scanned, there's a not-unusual case that it
++ * doesn't stay idle for long.
++ * (plus, avoids some kind of re-activation race.)
++ */
++ if (list_empty(&qh->qtd_list)
++ && qh->qh_state == QH_STATE_LINKED) {
++ if (!ehci->reclaim && (stopped ||
++ time_after_eq(jiffies, qh->unlink_time)))
++ start_unlink_async(ehci, qh);
++ else
++ action = TIMER_ASYNC_SHRINK;
++ }
+ }
+ if (action == TIMER_ASYNC_SHRINK)
+ timer_action (ehci, TIMER_ASYNC_SHRINK);
+diff --git a/drivers/usb/host/ehci.h b/drivers/usb/host/ehci.h
+index 5b3ca74..b2b3416 100644
+--- a/drivers/usb/host/ehci.h
++++ b/drivers/usb/host/ehci.h
+@@ -74,6 +74,7 @@ struct ehci_hcd { /* one per controller */
+ /* async schedule support */
+ struct ehci_qh *async;
+ struct ehci_qh *reclaim;
++ struct ehci_qh *qh_scan_next;
+ unsigned scanning : 1;
+
+ /* periodic schedule support */
+@@ -116,7 +117,6 @@ struct ehci_hcd { /* one per controller */
+ struct timer_list iaa_watchdog;
+ struct timer_list watchdog;
+ unsigned long actions;
+- unsigned stamp;
+ unsigned random_frame;
+ unsigned long next_statechange;
+ ktime_t last_periodic_enable;
+@@ -335,6 +335,7 @@ struct ehci_qh {
+ struct ehci_qh *reclaim; /* next to reclaim */
+
+ struct ehci_hcd *ehci;
++ unsigned long unlink_time;
+
+ /*
+ * Do NOT use atomic operations for QH refcounting. On some CPUs
+diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c
+index 981b604..01e7fae 100644
+--- a/drivers/usb/host/pci-quirks.c
++++ b/drivers/usb/host/pci-quirks.c
+@@ -418,12 +418,12 @@ static void __devinit quirk_usb_handoff_xhci(struct pci_dev *pdev)
+ void __iomem *op_reg_base;
+ u32 val;
+ int timeout;
++ int len = pci_resource_len(pdev, 0);
+
+ if (!mmio_resource_enabled(pdev, 0))
+ return;
+
+- base = ioremap_nocache(pci_resource_start(pdev, 0),
+- pci_resource_len(pdev, 0));
++ base = ioremap_nocache(pci_resource_start(pdev, 0), len);
+ if (base == NULL)
+ return;
+
+@@ -433,9 +433,17 @@ static void __devinit quirk_usb_handoff_xhci(struct pci_dev *pdev)
+ */
+ ext_cap_offset = xhci_find_next_cap_offset(base, XHCI_HCC_PARAMS_OFFSET);
+ do {
++ if ((ext_cap_offset + sizeof(val)) > len) {
++ /* We're reading garbage from the controller */
++ dev_warn(&pdev->dev,
++ "xHCI controller failing to respond");
++ return;
++ }
++
+ if (!ext_cap_offset)
+ /* We've reached the end of the extended capabilities */
+ goto hc_init;
++
+ val = readl(base + ext_cap_offset);
+ if (XHCI_EXT_CAPS_ID(val) == XHCI_EXT_CAPS_LEGACY)
+ break;
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index c374beb..dd958e9 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -2364,6 +2364,9 @@ static void ftdi_set_termios(struct tty_struct *tty,
+
+ cflag = termios->c_cflag;
+
++ if (!old_termios)
++ goto no_skip;
++
+ if (old_termios->c_cflag == termios->c_cflag
+ && old_termios->c_ispeed == termios->c_ispeed
+ && old_termios->c_ospeed == termios->c_ospeed)
+@@ -2377,6 +2380,7 @@ static void ftdi_set_termios(struct tty_struct *tty,
+ (termios->c_cflag & (CSIZE|PARODD|PARENB|CMSPAR|CSTOPB)))
+ goto no_data_parity_stop_changes;
+
++no_skip:
+ /* Set number of data bits, parity, stop bits */
+
+ termios->c_cflag &= ~CMSPAR;
+diff --git a/drivers/usb/serial/garmin_gps.c b/drivers/usb/serial/garmin_gps.c
+index 867d97b..7c3ac7b 100644
+--- a/drivers/usb/serial/garmin_gps.c
++++ b/drivers/usb/serial/garmin_gps.c
+@@ -974,10 +974,7 @@ static void garmin_close(struct usb_serial_port *port)
+ if (!serial)
+ return;
+
+- mutex_lock(&port->serial->disc_mutex);
+-
+- if (!port->serial->disconnected)
+- garmin_clear(garmin_data_p);
++ garmin_clear(garmin_data_p);
+
+ /* shutdown our urbs */
+ usb_kill_urb(port->read_urb);
+@@ -986,8 +983,6 @@ static void garmin_close(struct usb_serial_port *port)
+ /* keep reset state so we know that we must start a new session */
+ if (garmin_data_p->state != STATE_RESET)
+ garmin_data_p->state = STATE_DISCONNECTED;
+-
+- mutex_unlock(&port->serial->disc_mutex);
+ }
+
+
+diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c
+index 14d51e6..cf515f0 100644
+--- a/drivers/usb/serial/io_ti.c
++++ b/drivers/usb/serial/io_ti.c
+@@ -574,6 +574,9 @@ static void chase_port(struct edgeport_port *port, unsigned long timeout,
+ wait_queue_t wait;
+ unsigned long flags;
+
++ if (!tty)
++ return;
++
+ if (!timeout)
+ timeout = (HZ * EDGE_CLOSING_WAIT)/100;
+
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index 61829b8..c802c77 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -2569,7 +2569,6 @@ error:
+ kfree(mos7840_port->ctrl_buf);
+ usb_free_urb(mos7840_port->control_urb);
+ kfree(mos7840_port);
+- serial->port[i] = NULL;
+ }
+ return status;
+ }
+@@ -2636,6 +2635,7 @@ static void mos7840_release(struct usb_serial *serial)
+ mos7840_port = mos7840_get_port_private(serial->port[i]);
+ dbg("mos7840_port %d = %p", i, mos7840_port);
+ if (mos7840_port) {
++ usb_free_urb(mos7840_port->control_urb);
+ kfree(mos7840_port->ctrl_buf);
+ kfree(mos7840_port->dr);
+ kfree(mos7840_port);
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index 1b5c9f8..0cbf847 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -925,6 +925,7 @@ static void sierra_release(struct usb_serial *serial)
+ continue;
+ kfree(portdata);
+ }
++ kfree(serial->private);
+ }
+
+ #ifdef CONFIG_PM
+diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
+index 1093d2e..1247be1 100644
+--- a/drivers/usb/serial/whiteheat.c
++++ b/drivers/usb/serial/whiteheat.c
+@@ -576,6 +576,7 @@ no_firmware:
+ "%s: please contact support at connecttech.com\n",
+ serial->type->description);
+ kfree(result);
++ kfree(command);
+ return -ENODEV;
+
+ no_command_private:
+diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
+index acc7e3b..74284bd 100644
+--- a/drivers/w1/w1.c
++++ b/drivers/w1/w1.c
+@@ -918,7 +918,8 @@ void w1_search(struct w1_master *dev, u8 search_type, w1_slave_found_callback cb
+ tmp64 = (triplet_ret >> 2);
+ rn |= (tmp64 << i);
+
+- if (kthread_should_stop()) {
++ /* ensure we're called from kthread and not by netlink callback */
++ if (!dev->priv && kthread_should_stop()) {
+ dev_dbg(&dev->dev, "Abort w1_search\n");
+ return;
+ }
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index a64fde6..c564293 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1699,30 +1699,19 @@ static int elf_note_info_init(struct elf_note_info *info)
+ return 0;
+ info->psinfo = kmalloc(sizeof(*info->psinfo), GFP_KERNEL);
+ if (!info->psinfo)
+- goto notes_free;
++ return 0;
+ info->prstatus = kmalloc(sizeof(*info->prstatus), GFP_KERNEL);
+ if (!info->prstatus)
+- goto psinfo_free;
++ return 0;
+ info->fpu = kmalloc(sizeof(*info->fpu), GFP_KERNEL);
+ if (!info->fpu)
+- goto prstatus_free;
++ return 0;
+ #ifdef ELF_CORE_COPY_XFPREGS
+ info->xfpu = kmalloc(sizeof(*info->xfpu), GFP_KERNEL);
+ if (!info->xfpu)
+- goto fpu_free;
++ return 0;
+ #endif
+ return 1;
+-#ifdef ELF_CORE_COPY_XFPREGS
+- fpu_free:
+- kfree(info->fpu);
+-#endif
+- prstatus_free:
+- kfree(info->prstatus);
+- psinfo_free:
+- kfree(info->psinfo);
+- notes_free:
+- kfree(info->notes);
+- return 0;
+ }
+
+ static int fill_note_info(struct elfhdr *elf, int phdrs,
+diff --git a/fs/binfmt_em86.c b/fs/binfmt_em86.c
+index 32fb00b..416dcae 100644
+--- a/fs/binfmt_em86.c
++++ b/fs/binfmt_em86.c
+@@ -43,7 +43,6 @@ static int load_em86(struct linux_binprm *bprm,struct pt_regs *regs)
+ return -ENOEXEC;
+ }
+
+- bprm->recursion_depth++; /* Well, the bang-shell is implicit... */
+ allow_write_access(bprm->file);
+ fput(bprm->file);
+ bprm->file = NULL;
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index 42b60b0..258c5ca 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -116,10 +116,6 @@ static int load_misc_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ if (!enabled)
+ goto _ret;
+
+- retval = -ENOEXEC;
+- if (bprm->recursion_depth > BINPRM_MAX_RECURSION)
+- goto _ret;
+-
+ /* to keep locking time low, we copy the interpreter string */
+ read_lock(&entries_lock);
+ fmt = check_file(bprm);
+@@ -176,7 +172,10 @@ static int load_misc_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ goto _error;
+ bprm->argc ++;
+
+- bprm->interp = iname; /* for binfmt_script */
++ /* Update interp in case binfmt_script needs it. */
++ retval = bprm_change_interp(iname, bprm);
++ if (retval < 0)
++ goto _error;
+
+ interp_file = open_exec (iname);
+ retval = PTR_ERR (interp_file);
+@@ -197,8 +196,6 @@ static int load_misc_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ if (retval < 0)
+ goto _error;
+
+- bprm->recursion_depth++;
+-
+ retval = search_binary_handler (bprm, regs);
+ if (retval < 0)
+ goto _error;
+diff --git a/fs/binfmt_script.c b/fs/binfmt_script.c
+index 0834350..4fe6b8a 100644
+--- a/fs/binfmt_script.c
++++ b/fs/binfmt_script.c
+@@ -22,15 +22,13 @@ static int load_script(struct linux_binprm *bprm,struct pt_regs *regs)
+ char interp[BINPRM_BUF_SIZE];
+ int retval;
+
+- if ((bprm->buf[0] != '#') || (bprm->buf[1] != '!') ||
+- (bprm->recursion_depth > BINPRM_MAX_RECURSION))
++ if ((bprm->buf[0] != '#') || (bprm->buf[1] != '!'))
+ return -ENOEXEC;
+ /*
+ * This section does the #! interpretation.
+ * Sorta complicated, but hopefully it will work. -TYT
+ */
+
+- bprm->recursion_depth++;
+ allow_write_access(bprm->file);
+ fput(bprm->file);
+ bprm->file = NULL;
+@@ -82,7 +80,9 @@ static int load_script(struct linux_binprm *bprm,struct pt_regs *regs)
+ retval = copy_strings_kernel(1, &i_name, bprm);
+ if (retval) return retval;
+ bprm->argc++;
+- bprm->interp = interp;
++ retval = bprm_change_interp(interp, bprm);
++ if (retval < 0)
++ return retval;
+
+ /*
+ * OK, now restart the process with the interpreter's dentry.
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 5d56a8d..6190a10 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -557,6 +557,12 @@ int btrfs_close_devices(struct btrfs_fs_devices *fs_devices)
+ __btrfs_close_devices(fs_devices);
+ free_fs_devices(fs_devices);
+ }
++ /*
++ * Wait for rcu kworkers under __btrfs_close_devices
++ * to finish all blkdev_puts so device is really
++ * free when umount is done.
++ */
++ rcu_barrier();
+ return ret;
+ }
+
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index fea9e89..b36a8aa 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -226,6 +226,8 @@ compose_mount_options_out:
+ compose_mount_options_err:
+ kfree(mountdata);
+ mountdata = ERR_PTR(rc);
++ kfree(*devname);
++ *devname = NULL;
+ goto compose_mount_options_out;
+ }
+
+diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c
+index d84e705..98d3c58 100644
+--- a/fs/compat_ioctl.c
++++ b/fs/compat_ioctl.c
+@@ -234,6 +234,8 @@ static int do_video_set_spu_palette(unsigned int fd, unsigned int cmd, unsigned
+ up = (struct compat_video_spu_palette __user *) arg;
+ err = get_user(palp, &up->palette);
+ err |= get_user(length, &up->length);
++ if (err)
++ return -EFAULT;
+
+ up_native = compat_alloc_user_space(sizeof(struct video_spu_palette));
+ err = put_user(compat_ptr(palp), &up_native->palette);
+@@ -350,6 +352,7 @@ static int dev_ifconf(unsigned int fd, unsigned int cmd, unsigned long arg)
+ if (copy_from_user(&ifc32, compat_ptr(arg), sizeof(struct ifconf32)))
+ return -EFAULT;
+
++ memset(&ifc, 0, sizeof(ifc));
+ if (ifc32.ifcbuf == 0) {
+ ifc32.ifc_len = 0;
+ ifc.ifc_len = 0;
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index ff57421..83fbd64 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1183,10 +1183,30 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi, struct epoll_even
+ * otherwise we might miss an event that happens between the
+ * f_op->poll() call and the new event set registering.
+ */
+- epi->event.events = event->events;
++ epi->event.events = event->events; /* need barrier below */
+ epi->event.data = event->data; /* protected by mtx */
+
+ /*
++ * The following barrier has two effects:
++ *
++ * 1) Flush epi changes above to other CPUs. This ensures
++ * we do not miss events from ep_poll_callback if an
++ * event occurs immediately after we call f_op->poll().
++ * We need this because we did not take ep->lock while
++ * changing epi above (but ep_poll_callback does take
++ * ep->lock).
++ *
++ * 2) We also need to ensure we do not miss _past_ events
++ * when calling f_op->poll(). This barrier also
++ * pairs with the barrier in wq_has_sleeper (see
++ * comments for wq_has_sleeper).
++ *
++ * This barrier will now guarantee ep_poll_callback or f_op->poll
++ * (or both) will notice the readiness of an item.
++ */
++ smp_mb();
++
++ /*
+ * Get current event bits. We can safely use the file* here because
+ * its usage count has been increased by the caller of this function.
+ */
+diff --git a/fs/exec.c b/fs/exec.c
+index 86fafc6..feb2435 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1108,9 +1108,24 @@ void free_bprm(struct linux_binprm *bprm)
+ mutex_unlock(&current->cred_guard_mutex);
+ abort_creds(bprm->cred);
+ }
++ /* If a binfmt changed the interp, free it. */
++ if (bprm->interp != bprm->filename)
++ kfree(bprm->interp);
+ kfree(bprm);
+ }
+
++int bprm_change_interp(char *interp, struct linux_binprm *bprm)
++{
++ /* If a binfmt changed the interp, free it first. */
++ if (bprm->interp != bprm->filename)
++ kfree(bprm->interp);
++ bprm->interp = kstrdup(interp, GFP_KERNEL);
++ if (!bprm->interp)
++ return -ENOMEM;
++ return 0;
++}
++EXPORT_SYMBOL(bprm_change_interp);
++
+ /*
+ * install the new credentials for this executable
+ */
+@@ -1270,6 +1285,10 @@ int search_binary_handler(struct linux_binprm *bprm,struct pt_regs *regs)
+ int try,retval;
+ struct linux_binfmt *fmt;
+
++ /* This allows 4 levels of binfmt rewrites before failing hard. */
++ if (depth > 5)
++ return -ELOOP;
++
+ retval = security_bprm_check(bprm);
+ if (retval)
+ return retval;
+@@ -1291,12 +1310,8 @@ int search_binary_handler(struct linux_binprm *bprm,struct pt_regs *regs)
+ if (!try_module_get(fmt->module))
+ continue;
+ read_unlock(&binfmt_lock);
++ bprm->recursion_depth = depth + 1;
+ retval = fn(bprm, regs);
+- /*
+- * Restore the depth counter to its starting value
+- * in this call, so we don't have to rely on every
+- * load_binary function to restore it on return.
+- */
+ bprm->recursion_depth = depth;
+ if (retval >= 0) {
+ if (depth == 0)
+diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c
+index 0df88b2..d29a06b 100644
+--- a/fs/ext4/acl.c
++++ b/fs/ext4/acl.c
+@@ -454,8 +454,10 @@ ext4_xattr_set_acl(struct inode *inode, int type, const void *value,
+
+ retry:
+ handle = ext4_journal_start(inode, EXT4_DATA_TRANS_BLOCKS(inode->i_sb));
+- if (IS_ERR(handle))
+- return PTR_ERR(handle);
++ if (IS_ERR(handle)) {
++ error = PTR_ERR(handle);
++ goto release_and_out;
++ }
+ error = ext4_set_acl(handle, inode, type, acl);
+ ext4_journal_stop(handle);
+ if (error == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
+diff --git a/fs/ext4/ext4_extents.h b/fs/ext4/ext4_extents.h
+index bdb6ce7..24fa647 100644
+--- a/fs/ext4/ext4_extents.h
++++ b/fs/ext4/ext4_extents.h
+@@ -137,8 +137,11 @@ typedef int (*ext_prepare_callback)(struct inode *, struct ext4_ext_path *,
+ #define EXT_BREAK 1
+ #define EXT_REPEAT 2
+
+-/* Maximum logical block in a file; ext4_extent's ee_block is __le32 */
+-#define EXT_MAX_BLOCK 0xffffffff
++/*
++ * Maximum number of logical blocks in a file; ext4_extent's ee_block is
++ * __le32.
++ */
++#define EXT_MAX_BLOCKS 0xffffffff
+
+ /*
+ * EXT_INIT_MAX_LEN is the maximum number of blocks we can have in an
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index b4402c8..3f022ea 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -62,6 +62,7 @@ ext4_fsblk_t ext_pblock(struct ext4_extent *ex)
+ * idx_pblock:
+ * combine low and high parts of a leaf physical block number into ext4_fsblk_t
+ */
++#define EXT4_EXT_DATA_VALID 0x8 /* extent contains valid data */
+ ext4_fsblk_t idx_pblock(struct ext4_extent_idx *ix)
+ {
+ ext4_fsblk_t block;
+@@ -1331,7 +1332,7 @@ got_index:
+
+ /*
+ * ext4_ext_next_allocated_block:
+- * returns allocated block in subsequent extent or EXT_MAX_BLOCK.
++ * returns allocated block in subsequent extent or EXT_MAX_BLOCKS.
+ * NOTE: it considers block number from index entry as
+ * allocated block. Thus, index entries have to be consistent
+ * with leaves.
+@@ -1345,7 +1346,7 @@ ext4_ext_next_allocated_block(struct ext4_ext_path *path)
+ depth = path->p_depth;
+
+ if (depth == 0 && path->p_ext == NULL)
+- return EXT_MAX_BLOCK;
++ return EXT_MAX_BLOCKS;
+
+ while (depth >= 0) {
+ if (depth == path->p_depth) {
+@@ -1362,12 +1363,12 @@ ext4_ext_next_allocated_block(struct ext4_ext_path *path)
+ depth--;
+ }
+
+- return EXT_MAX_BLOCK;
++ return EXT_MAX_BLOCKS;
+ }
+
+ /*
+ * ext4_ext_next_leaf_block:
+- * returns first allocated block from next leaf or EXT_MAX_BLOCK
++ * returns first allocated block from next leaf or EXT_MAX_BLOCKS
+ */
+ static ext4_lblk_t ext4_ext_next_leaf_block(struct inode *inode,
+ struct ext4_ext_path *path)
+@@ -1379,7 +1380,7 @@ static ext4_lblk_t ext4_ext_next_leaf_block(struct inode *inode,
+
+ /* zero-tree has no leaf blocks at all */
+ if (depth == 0)
+- return EXT_MAX_BLOCK;
++ return EXT_MAX_BLOCKS;
+
+ /* go to index block */
+ depth--;
+@@ -1392,7 +1393,7 @@ static ext4_lblk_t ext4_ext_next_leaf_block(struct inode *inode,
+ depth--;
+ }
+
+- return EXT_MAX_BLOCK;
++ return EXT_MAX_BLOCKS;
+ }
+
+ /*
+@@ -1572,13 +1573,13 @@ unsigned int ext4_ext_check_overlap(struct inode *inode,
+ */
+ if (b2 < b1) {
+ b2 = ext4_ext_next_allocated_block(path);
+- if (b2 == EXT_MAX_BLOCK)
++ if (b2 == EXT_MAX_BLOCKS)
+ goto out;
+ }
+
+ /* check for wrap through zero on extent logical start block*/
+ if (b1 + len1 < b1) {
+- len1 = EXT_MAX_BLOCK - b1;
++ len1 = EXT_MAX_BLOCKS - b1;
+ newext->ee_len = cpu_to_le16(len1);
+ ret = 1;
+ }
+@@ -1654,7 +1655,7 @@ repeat:
+ fex = EXT_LAST_EXTENT(eh);
+ next = ext4_ext_next_leaf_block(inode, path);
+ if (le32_to_cpu(newext->ee_block) > le32_to_cpu(fex->ee_block)
+- && next != EXT_MAX_BLOCK) {
++ && next != EXT_MAX_BLOCKS) {
+ ext_debug("next leaf block - %d\n", next);
+ BUG_ON(npath != NULL);
+ npath = ext4_ext_find_extent(inode, next, NULL);
+@@ -1772,7 +1773,7 @@ int ext4_ext_walk_space(struct inode *inode, ext4_lblk_t block,
+ BUG_ON(func == NULL);
+ BUG_ON(inode == NULL);
+
+- while (block < last && block != EXT_MAX_BLOCK) {
++ while (block < last && block != EXT_MAX_BLOCKS) {
+ num = last - block;
+ /* find extent for this block */
+ down_read(&EXT4_I(inode)->i_data_sem);
+@@ -1900,7 +1901,7 @@ ext4_ext_put_gap_in_cache(struct inode *inode, struct ext4_ext_path *path,
+ if (ex == NULL) {
+ /* there is no extent yet, so gap is [0;-] */
+ lblock = 0;
+- len = EXT_MAX_BLOCK;
++ len = EXT_MAX_BLOCKS;
+ ext_debug("cache gap(whole file):");
+ } else if (block < le32_to_cpu(ex->ee_block)) {
+ lblock = block;
+@@ -2145,8 +2146,8 @@ ext4_ext_rm_leaf(handle_t *handle, struct inode *inode,
+ path[depth].p_ext = ex;
+
+ a = ex_ee_block > start ? ex_ee_block : start;
+- b = ex_ee_block + ex_ee_len - 1 < EXT_MAX_BLOCK ?
+- ex_ee_block + ex_ee_len - 1 : EXT_MAX_BLOCK;
++ b = ex_ee_block + ex_ee_len - 1 < EXT_MAX_BLOCKS ?
++ ex_ee_block + ex_ee_len - 1 : EXT_MAX_BLOCKS;
+
+ ext_debug(" border %u:%u\n", a, b);
+
+@@ -2933,6 +2934,30 @@ static int ext4_split_unwritten_extents(handle_t *handle,
+ ext4_ext_mark_uninitialized(ex3);
+ err = ext4_ext_insert_extent(handle, inode, path, ex3, flags);
+ if (err == -ENOSPC && may_zeroout) {
++ /*
++ * This is different from the upstream, because we
++ * need only a flag to say that the extent contains
++ * the actual data.
++ *
++ * If the extent contains valid data, which can only
++ * happen if AIO races with fallocate, then we got
++ * here from ext4_convert_unwritten_extents_dio().
++ * So we have to be careful not to zeroout valid data
++ * in the extent.
++ *
++ * To avoid it, we only zeroout the ex3 and extend the
++ * extent which is going to become initialized to cover
++ * ex3 as well. and continue as we would if only
++ * split in two was required.
++ */
++ if (flags & EXT4_EXT_DATA_VALID) {
++ err = ext4_ext_zeroout(inode, ex3);
++ if (err)
++ goto fix_extent_len;
++ max_blocks = allocated;
++ ex2->ee_len = cpu_to_le16(max_blocks);
++ goto skip;
++ }
+ err = ext4_ext_zeroout(inode, &orig_ex);
+ if (err)
+ goto fix_extent_len;
+@@ -2978,6 +3003,7 @@ static int ext4_split_unwritten_extents(handle_t *handle,
+
+ allocated = max_blocks;
+ }
++skip:
+ /*
+ * If there was a change of depth as part of the
+ * insertion of ex3 above, we need to update the length
+@@ -3030,11 +3056,16 @@ fix_extent_len:
+ ext4_ext_dirty(handle, inode, path + depth);
+ return err;
+ }
++
+ static int ext4_convert_unwritten_extents_dio(handle_t *handle,
+ struct inode *inode,
++ ext4_lblk_t iblock,
++ unsigned int max_blocks,
+ struct ext4_ext_path *path)
+ {
+ struct ext4_extent *ex;
++ ext4_lblk_t ee_block;
++ unsigned int ee_len;
+ struct ext4_extent_header *eh;
+ int depth;
+ int err = 0;
+@@ -3043,6 +3074,30 @@ static int ext4_convert_unwritten_extents_dio(handle_t *handle,
+ depth = ext_depth(inode);
+ eh = path[depth].p_hdr;
+ ex = path[depth].p_ext;
++ ee_block = le32_to_cpu(ex->ee_block);
++ ee_len = ext4_ext_get_actual_len(ex);
++
++ ext_debug("ext4_convert_unwritten_extents_endio: inode %lu, logical"
++ " block %llu, max_blocks %u\n", inode->i_ino,
++ (unsigned long long)ee_block, ee_len);
++
++ /* If extent is larger than requested then split is required */
++
++ if (ee_block != iblock || ee_len > max_blocks) {
++ err = ext4_split_unwritten_extents(handle, inode, path,
++ iblock, max_blocks,
++ EXT4_EXT_DATA_VALID);
++ if (err < 0)
++ goto out;
++ ext4_ext_drop_refs(path);
++ path = ext4_ext_find_extent(inode, iblock, path);
++ if (IS_ERR(path)) {
++ err = PTR_ERR(path);
++ goto out;
++ }
++ depth = ext_depth(inode);
++ ex = path[depth].p_ext;
++ }
+
+ err = ext4_ext_get_access(handle, inode, path + depth);
+ if (err)
+@@ -3129,7 +3184,8 @@ ext4_ext_handle_uninitialized_extents(handle_t *handle, struct inode *inode,
+ /* async DIO end_io complete, convert the filled extent to written */
+ if (flags == EXT4_GET_BLOCKS_DIO_CONVERT_EXT) {
+ ret = ext4_convert_unwritten_extents_dio(handle, inode,
+- path);
++ iblock, max_blocks,
++ path);
+ if (ret >= 0)
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+ goto out2;
+@@ -3498,6 +3554,12 @@ void ext4_ext_truncate(struct inode *inode)
+ int err = 0;
+
+ /*
++ * finish any pending end_io work so we won't run the risk of
++ * converting any truncated blocks to initialized later
++ */
++ flush_aio_dio_completed_IO(inode);
++
++ /*
+ * probably first extent we're gonna free will be last in block
+ */
+ err = ext4_writepage_trans_blocks(inode);
+@@ -3630,6 +3692,9 @@ long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len)
+ mutex_unlock(&inode->i_mutex);
+ return ret;
+ }
++
++ /* Prevent race condition between unwritten */
++ flush_aio_dio_completed_IO(inode);
+ retry:
+ while (ret >= 0 && ret < max_blocks) {
+ block = block + ret;
+@@ -3783,15 +3848,14 @@ static int ext4_ext_fiemap_cb(struct inode *inode, struct ext4_ext_path *path,
+ flags |= FIEMAP_EXTENT_UNWRITTEN;
+
+ /*
+- * If this extent reaches EXT_MAX_BLOCK, it must be last.
++ * If this extent reaches EXT_MAX_BLOCKS, it must be last.
+ *
+- * Or if ext4_ext_next_allocated_block is EXT_MAX_BLOCK,
++ * Or if ext4_ext_next_allocated_block is EXT_MAX_BLOCKS,
+ * this also indicates no more allocated blocks.
+ *
+- * XXX this might miss a single-block extent at EXT_MAX_BLOCK
+ */
+- if (ext4_ext_next_allocated_block(path) == EXT_MAX_BLOCK ||
+- newex->ec_block + newex->ec_len - 1 == EXT_MAX_BLOCK) {
++ if (ext4_ext_next_allocated_block(path) == EXT_MAX_BLOCKS ||
++ newex->ec_block + newex->ec_len == EXT_MAX_BLOCKS) {
+ loff_t size = i_size_read(inode);
+ loff_t bs = EXT4_BLOCK_SIZE(inode->i_sb);
+
+@@ -3871,8 +3935,8 @@ int ext4_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+
+ start_blk = start >> inode->i_sb->s_blocksize_bits;
+ last_blk = (start + len - 1) >> inode->i_sb->s_blocksize_bits;
+- if (last_blk >= EXT_MAX_BLOCK)
+- last_blk = EXT_MAX_BLOCK-1;
++ if (last_blk >= EXT_MAX_BLOCKS)
++ last_blk = EXT_MAX_BLOCKS-1;
+ len_blks = ((ext4_lblk_t) last_blk) - start_blk + 1;
+
+ /*
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index efe6363..babf448 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5121,6 +5121,7 @@ static int ext4_do_update_inode(handle_t *handle,
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct buffer_head *bh = iloc->bh;
+ int err = 0, rc, block;
++ int need_datasync = 0;
+
+ /* For fields not not tracking in the in-memory inode,
+ * initialise them to zero for new inodes. */
+@@ -5169,7 +5170,10 @@ static int ext4_do_update_inode(handle_t *handle,
+ raw_inode->i_file_acl_high =
+ cpu_to_le16(ei->i_file_acl >> 32);
+ raw_inode->i_file_acl_lo = cpu_to_le32(ei->i_file_acl);
+- ext4_isize_set(raw_inode, ei->i_disksize);
++ if (ei->i_disksize != ext4_isize(raw_inode)) {
++ ext4_isize_set(raw_inode, ei->i_disksize);
++ need_datasync = 1;
++ }
+ if (ei->i_disksize > 0x7fffffffULL) {
+ struct super_block *sb = inode->i_sb;
+ if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
+@@ -5222,7 +5226,7 @@ static int ext4_do_update_inode(handle_t *handle,
+ err = rc;
+ ext4_clear_inode_state(inode, EXT4_STATE_NEW);
+
+- ext4_update_inode_fsync_trans(handle, inode, 0);
++ ext4_update_inode_fsync_trans(handle, inode, need_datasync);
+ out_brelse:
+ brelse(bh);
+ ext4_std_error(inode->i_sb, err);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 42bac1b..cecf2a5 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2070,7 +2070,11 @@ repeat:
+ group = ac->ac_g_ex.fe_group;
+
+ for (i = 0; i < ngroups; group++, i++) {
+- if (group == ngroups)
++ /*
++ * Artificially restricted ngroups for non-extent
++ * files makes group > ngroups possible on first loop.
++ */
++ if (group >= ngroups)
+ group = 0;
+
+ /* This now checks without needing the buddy page */
+@@ -4163,7 +4167,7 @@ static void ext4_mb_add_n_trim(struct ext4_allocation_context *ac)
+ /* The max size of hash table is PREALLOC_TB_SIZE */
+ order = PREALLOC_TB_SIZE - 1;
+ /* Add the prealloc space to lg */
+- rcu_read_lock();
++ spin_lock(&lg->lg_prealloc_lock);
+ list_for_each_entry_rcu(tmp_pa, &lg->lg_prealloc_list[order],
+ pa_inode_list) {
+ spin_lock(&tmp_pa->pa_lock);
+@@ -4187,12 +4191,12 @@ static void ext4_mb_add_n_trim(struct ext4_allocation_context *ac)
+ if (!added)
+ list_add_tail_rcu(&pa->pa_inode_list,
+ &lg->lg_prealloc_list[order]);
+- rcu_read_unlock();
++ spin_unlock(&lg->lg_prealloc_lock);
+
+ /* Now trim the list to be not more than 8 elements */
+ if (lg_prealloc_count > 8) {
+ ext4_mb_discard_lg_preallocations(sb, lg,
+- order, lg_prealloc_count);
++ order, lg_prealloc_count);
+ return;
+ }
+ return ;
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index a73ed78..da25617 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -1001,12 +1001,12 @@ mext_check_arguments(struct inode *orig_inode,
+ return -EINVAL;
+ }
+
+- if ((orig_start > EXT_MAX_BLOCK) ||
+- (donor_start > EXT_MAX_BLOCK) ||
+- (*len > EXT_MAX_BLOCK) ||
+- (orig_start + *len > EXT_MAX_BLOCK)) {
++ if ((orig_start >= EXT_MAX_BLOCKS) ||
++ (donor_start >= EXT_MAX_BLOCKS) ||
++ (*len > EXT_MAX_BLOCKS) ||
++ (orig_start + *len >= EXT_MAX_BLOCKS)) {
+ ext4_debug("ext4 move extent: Can't handle over [%u] blocks "
+- "[ino:orig %lu, donor %lu]\n", EXT_MAX_BLOCK,
++ "[ino:orig %lu, donor %lu]\n", EXT_MAX_BLOCKS,
+ orig_inode->i_ino, donor_inode->i_ino);
+ return -EINVAL;
+ }
+@@ -1208,7 +1208,12 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
+ orig_inode->i_ino, donor_inode->i_ino);
+ return -EINVAL;
+ }
+-
++ /* TODO: This is non obvious task to swap blocks for inodes with full
++ jornaling enabled */
++ if (ext4_should_journal_data(orig_inode) ||
++ ext4_should_journal_data(donor_inode)) {
++ return -EINVAL;
++ }
+ /* Protect orig and donor inodes against a truncate */
+ ret1 = mext_inode_double_lock(orig_inode, donor_inode);
+ if (ret1 < 0)
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index c3b6ad0..3a1af19 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1457,10 +1457,22 @@ static int make_indexed_dir(handle_t *handle, struct dentry *dentry,
+ frame->at = entries;
+ frame->bh = bh;
+ bh = bh2;
++
++ ext4_handle_dirty_metadata(handle, dir, frame->bh);
++ ext4_handle_dirty_metadata(handle, dir, bh);
++
+ de = do_split(handle,dir, &bh, frame, &hinfo, &retval);
+- dx_release (frames);
+- if (!(de))
++ if (!de) {
++ /*
++ * Even if the block split failed, we have to properly write
++ * out all the changes we did so far. Otherwise we can end up
++ * with corrupted filesystem.
++ */
++ ext4_mark_inode_dirty(handle, dir);
++ dx_release(frames);
+ return retval;
++ }
++ dx_release(frames);
+
+ retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
+ brelse(bh);
+@@ -1816,9 +1828,7 @@ retry:
+ err = PTR_ERR(inode);
+ if (!IS_ERR(inode)) {
+ init_special_inode(inode, inode->i_mode, rdev);
+-#ifdef CONFIG_EXT4_FS_XATTR
+ inode->i_op = &ext4_special_inode_operations;
+-#endif
+ err = ext4_add_nondir(handle, dentry, inode);
+ }
+ ext4_journal_stop(handle);
+@@ -1991,7 +2001,7 @@ int ext4_orphan_add(handle_t *handle, struct inode *inode)
+ struct ext4_iloc iloc;
+ int err = 0, rc;
+
+- if (!ext4_handle_valid(handle))
++ if (!EXT4_SB(sb)->s_journal)
+ return 0;
+
+ mutex_lock(&EXT4_SB(sb)->s_orphan_lock);
+@@ -2072,8 +2082,8 @@ int ext4_orphan_del(handle_t *handle, struct inode *inode)
+ struct ext4_iloc iloc;
+ int err = 0;
+
+- /* ext4_handle_valid() assumes a valid handle_t pointer */
+- if (handle && !ext4_handle_valid(handle))
++ if ((!EXT4_SB(inode->i_sb)->s_journal) &&
++ !(EXT4_SB(inode->i_sb)->s_mount_state & EXT4_ORPHAN_FS))
+ return 0;
+
+ mutex_lock(&EXT4_SB(inode->i_sb)->s_orphan_lock);
+@@ -2092,7 +2102,7 @@ int ext4_orphan_del(handle_t *handle, struct inode *inode)
+ * transaction handle with which to update the orphan list on
+ * disk, but we still need to remove the inode from the linked
+ * list in memory. */
+- if (sbi->s_journal && !handle)
++ if (!handle)
+ goto out;
+
+ err = ext4_reserve_inode_write(handle, inode, &iloc);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index f1e7077..108515f 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1937,7 +1937,9 @@ static void ext4_orphan_cleanup(struct super_block *sb,
+ __func__, inode->i_ino, inode->i_size);
+ jbd_debug(2, "truncating inode %lu to %lld bytes\n",
+ inode->i_ino, inode->i_size);
++ mutex_lock(&inode->i_mutex);
+ ext4_truncate(inode);
++ mutex_unlock(&inode->i_mutex);
+ nr_truncates++;
+ } else {
+ ext4_msg(sb, KERN_DEBUG,
+@@ -1975,6 +1977,12 @@ static void ext4_orphan_cleanup(struct super_block *sb,
+ * in the vfs. ext4 inode has 48 bits of i_block in fsblock units,
+ * so that won't be a limiting factor.
+ *
++ * However there is other limiting factor. We do store extents in the form
++ * of starting block and length, hence the resulting length of the extent
++ * covering maximum file size must fit into on-disk format containers as
++ * well. Given that length is always by 1 unit bigger than max unit (because
++ * we count 0 as well) we have to lower the s_maxbytes by one fs block.
++ *
+ * Note, this does *not* consider any metadata overhead for vfs i_blocks.
+ */
+ static loff_t ext4_max_size(int blkbits, int has_huge_files)
+@@ -1996,10 +2004,13 @@ static loff_t ext4_max_size(int blkbits, int has_huge_files)
+ upper_limit <<= blkbits;
+ }
+
+- /* 32-bit extent-start container, ee_block */
+- res = 1LL << 32;
++ /*
++ * 32-bit extent-start container, ee_block. We lower the maxbytes
++ * by one fs block, so ee_len can cover the extent of maximum file
++ * size
++ */
++ res = (1LL << 32) - 1;
+ res <<= blkbits;
+- res -= 1;
+
+ /* Sanity check against vm- & vfs- imposed limits */
+ if (res > upper_limit)
+diff --git a/fs/fat/inode.c b/fs/fat/inode.c
+index 76b7961..c187e92 100644
+--- a/fs/fat/inode.c
++++ b/fs/fat/inode.c
+@@ -558,7 +558,7 @@ static int fat_statfs(struct dentry *dentry, struct kstatfs *buf)
+ buf->f_bavail = sbi->free_clusters;
+ buf->f_fsid.val[0] = (u32)id;
+ buf->f_fsid.val[1] = (u32)(id >> 32);
+- buf->f_namelen = sbi->options.isvfat ? 260 : 12;
++ buf->f_namelen = sbi->options.isvfat ? FAT_LFN_LEN : 12;
+
+ return 0;
+ }
+diff --git a/fs/fat/namei_vfat.c b/fs/fat/namei_vfat.c
+index 72646e2..4251f35 100644
+--- a/fs/fat/namei_vfat.c
++++ b/fs/fat/namei_vfat.c
+@@ -499,17 +499,18 @@ xlate_to_uni(const unsigned char *name, int len, unsigned char *outname,
+ int charlen;
+
+ if (utf8) {
+- *outlen = utf8s_to_utf16s(name, len, (wchar_t *)outname);
++ *outlen = utf8s_to_utf16s(name, len, UTF16_HOST_ENDIAN,
++ (wchar_t *) outname, FAT_LFN_LEN + 2);
+ if (*outlen < 0)
+ return *outlen;
+- else if (*outlen > 255)
++ else if (*outlen > FAT_LFN_LEN)
+ return -ENAMETOOLONG;
+
+ op = &outname[*outlen * sizeof(wchar_t)];
+ } else {
+ if (nls) {
+ for (i = 0, ip = name, op = outname, *outlen = 0;
+- i < len && *outlen <= 255;
++ i < len && *outlen <= FAT_LFN_LEN;
+ *outlen += 1)
+ {
+ if (escape && (*ip == ':')) {
+@@ -549,7 +550,7 @@ xlate_to_uni(const unsigned char *name, int len, unsigned char *outname,
+ return -ENAMETOOLONG;
+ } else {
+ for (i = 0, ip = name, op = outname, *outlen = 0;
+- i < len && *outlen <= 255;
++ i < len && *outlen <= FAT_LFN_LEN;
+ i++, *outlen += 1)
+ {
+ *op++ = *ip++;
+diff --git a/fs/fscache/stats.c b/fs/fscache/stats.c
+index 46435f3a..4fd7e1c 100644
+--- a/fs/fscache/stats.c
++++ b/fs/fscache/stats.c
+@@ -276,5 +276,5 @@ const struct file_operations fscache_stats_fops = {
+ .open = fscache_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+- .release = seq_release,
++ .release = single_release,
+ };
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index 0022eec..b3d234e 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -447,7 +447,7 @@ void hfsplus_file_truncate(struct inode *inode)
+ struct address_space *mapping = inode->i_mapping;
+ struct page *page;
+ void *fsdata;
+- u32 size = inode->i_size;
++ loff_t size = inode->i_size;
+ int res;
+
+ res = pagecache_write_begin(NULL, mapping, size, 0,
+diff --git a/fs/isofs/export.c b/fs/isofs/export.c
+index e81a305..caec670 100644
+--- a/fs/isofs/export.c
++++ b/fs/isofs/export.c
+@@ -131,6 +131,7 @@ isofs_export_encode_fh(struct dentry *dentry,
+ len = 3;
+ fh32[0] = ei->i_iget5_block;
+ fh16[2] = (__u16)ei->i_iget5_offset; /* fh16 [sic] */
++ fh16[3] = 0; /* avoid leaking uninitialized data */
+ fh32[2] = inode->i_generation;
+ if (connectable && !S_ISDIR(inode->i_mode)) {
+ struct inode *parent;
+diff --git a/fs/jbd/commit.c b/fs/jbd/commit.c
+index 17d29a8..1060d48 100644
+--- a/fs/jbd/commit.c
++++ b/fs/jbd/commit.c
+@@ -85,7 +85,12 @@ nope:
+ static void release_data_buffer(struct buffer_head *bh)
+ {
+ if (buffer_freed(bh)) {
++ WARN_ON_ONCE(buffer_dirty(bh));
+ clear_buffer_freed(bh);
++ clear_buffer_mapped(bh);
++ clear_buffer_new(bh);
++ clear_buffer_req(bh);
++ bh->b_bdev = NULL;
+ release_buffer_page(bh);
+ } else
+ put_bh(bh);
+@@ -864,17 +869,35 @@ restart_loop:
+ * there's no point in keeping a checkpoint record for
+ * it. */
+
+- /* A buffer which has been freed while still being
+- * journaled by a previous transaction may end up still
+- * being dirty here, but we want to avoid writing back
+- * that buffer in the future now that the last use has
+- * been committed. That's not only a performance gain,
+- * it also stops aliasing problems if the buffer is left
+- * behind for writeback and gets reallocated for another
+- * use in a different page. */
++ /*
++ * A buffer which has been freed while still being journaled by
++ * a previous transaction.
++ */
+ if (buffer_freed(bh)) {
+- clear_buffer_freed(bh);
+- clear_buffer_jbddirty(bh);
++ /*
++ * If the running transaction is the one containing
++ * "add to orphan" operation (b_next_transaction !=
++ * NULL), we have to wait for that transaction to
++ * commit before we can really get rid of the buffer.
++ * So just clear b_modified to not confuse transaction
++ * credit accounting and refile the buffer to
++ * BJ_Forget of the running transaction. If the just
++ * committed transaction contains "add to orphan"
++ * operation, we can completely invalidate the buffer
++ * now. We are rather throughout in that since the
++ * buffer may be still accessible when blocksize <
++ * pagesize and it is attached to the last partial
++ * page.
++ */
++ jh->b_modified = 0;
++ if (!jh->b_next_transaction) {
++ clear_buffer_freed(bh);
++ clear_buffer_jbddirty(bh);
++ clear_buffer_mapped(bh);
++ clear_buffer_new(bh);
++ clear_buffer_req(bh);
++ bh->b_bdev = NULL;
++ }
+ }
+
+ if (buffer_jbddirty(bh)) {
+diff --git a/fs/jbd/transaction.c b/fs/jbd/transaction.c
+index 006f9ad..1352e60 100644
+--- a/fs/jbd/transaction.c
++++ b/fs/jbd/transaction.c
+@@ -1838,15 +1838,16 @@ static int __dispose_buffer(struct journal_head *jh, transaction_t *transaction)
+ * We're outside-transaction here. Either or both of j_running_transaction
+ * and j_committing_transaction may be NULL.
+ */
+-static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
++static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
++ int partial_page)
+ {
+ transaction_t *transaction;
+ struct journal_head *jh;
+ int may_free = 1;
+- int ret;
+
+ BUFFER_TRACE(bh, "entry");
+
++retry:
+ /*
+ * It is safe to proceed here without the j_list_lock because the
+ * buffers cannot be stolen by try_to_free_buffers as long as we are
+@@ -1864,6 +1865,29 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
+ if (!jh)
+ goto zap_buffer_no_jh;
+
++ /*
++ * We cannot remove the buffer from checkpoint lists until the
++ * transaction adding inode to orphan list (let's call it T)
++ * is committed. Otherwise if the transaction changing the
++ * buffer would be cleaned from the journal before T is
++ * committed, a crash will cause that the correct contents of
++ * the buffer will be lost. On the other hand we have to
++ * clear the buffer dirty bit at latest at the moment when the
++ * transaction marking the buffer as freed in the filesystem
++ * structures is committed because from that moment on the
++ * block can be reallocated and used by a different page.
++ * Since the block hasn't been freed yet but the inode has
++ * already been added to orphan list, it is safe for us to add
++ * the buffer to BJ_Forget list of the newest transaction.
++ *
++ * Also we have to clear buffer_mapped flag of a truncated buffer
++ * because the buffer_head may be attached to the page straddling
++ * i_size (can happen only when blocksize < pagesize) and thus the
++ * buffer_head can be reused when the file is extended again. So we end
++ * up keeping around invalidated buffers attached to transactions'
++ * BJ_Forget list just to stop checkpointing code from cleaning up
++ * the transaction this buffer was modified in.
++ */
+ transaction = jh->b_transaction;
+ if (transaction == NULL) {
+ /* First case: not on any transaction. If it
+@@ -1889,13 +1913,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
+ * committed, the buffer won't be needed any
+ * longer. */
+ JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
+- ret = __dispose_buffer(jh,
++ may_free = __dispose_buffer(jh,
+ journal->j_running_transaction);
+- journal_put_journal_head(jh);
+- spin_unlock(&journal->j_list_lock);
+- jbd_unlock_bh_state(bh);
+- spin_unlock(&journal->j_state_lock);
+- return ret;
++ goto zap_buffer;
+ } else {
+ /* There is no currently-running transaction. So the
+ * orphan record which we wrote for this file must have
+@@ -1903,13 +1923,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
+ * the committing transaction, if it exists. */
+ if (journal->j_committing_transaction) {
+ JBUFFER_TRACE(jh, "give to committing trans");
+- ret = __dispose_buffer(jh,
++ may_free = __dispose_buffer(jh,
+ journal->j_committing_transaction);
+- journal_put_journal_head(jh);
+- spin_unlock(&journal->j_list_lock);
+- jbd_unlock_bh_state(bh);
+- spin_unlock(&journal->j_state_lock);
+- return ret;
++ goto zap_buffer;
+ } else {
+ /* The orphan record's transaction has
+ * committed. We can cleanse this buffer */
+@@ -1929,16 +1945,31 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
+ goto zap_buffer;
+ }
+ /*
+- * If it is committing, we simply cannot touch it. We
+- * can remove it's next_transaction pointer from the
+- * running transaction if that is set, but nothing
+- * else. */
+- set_buffer_freed(bh);
+- if (jh->b_next_transaction) {
+- J_ASSERT(jh->b_next_transaction ==
+- journal->j_running_transaction);
+- jh->b_next_transaction = NULL;
++ * The buffer is committing, we simply cannot touch
++ * it. If the page is straddling i_size we have to wait
++ * for commit and try again.
++ */
++ if (partial_page) {
++ tid_t tid = journal->j_committing_transaction->t_tid;
++
++ journal_put_journal_head(jh);
++ spin_unlock(&journal->j_list_lock);
++ jbd_unlock_bh_state(bh);
++ spin_unlock(&journal->j_state_lock);
++ unlock_buffer(bh);
++ log_wait_commit(journal, tid);
++ lock_buffer(bh);
++ goto retry;
+ }
++ /*
++ * OK, buffer won't be reachable after truncate. We just set
++ * j_next_transaction to the running transaction (if there is
++ * one) and mark buffer as freed so that commit code knows it
++ * should clear dirty bits when it is done with the buffer.
++ */
++ set_buffer_freed(bh);
++ if (journal->j_running_transaction && buffer_jbddirty(bh))
++ jh->b_next_transaction = journal->j_running_transaction;
+ journal_put_journal_head(jh);
+ spin_unlock(&journal->j_list_lock);
+ jbd_unlock_bh_state(bh);
+@@ -1957,6 +1988,14 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
+ }
+
+ zap_buffer:
++ /*
++ * This is tricky. Although the buffer is truncated, it may be reused
++ * if blocksize < pagesize and it is attached to the page straddling
++ * EOF. Since the buffer might have been added to BJ_Forget list of the
++ * running transaction, journal_get_write_access() won't clear
++ * b_modified and credit accounting gets confused. So clear b_modified
++ * here. */
++ jh->b_modified = 0;
+ journal_put_journal_head(jh);
+ zap_buffer_no_jh:
+ spin_unlock(&journal->j_list_lock);
+@@ -2005,7 +2044,8 @@ void journal_invalidatepage(journal_t *journal,
+ if (offset <= curr_off) {
+ /* This block is wholly outside the truncation point */
+ lock_buffer(bh);
+- may_free &= journal_unmap_buffer(journal, bh);
++ may_free &= journal_unmap_buffer(journal, bh,
++ offset > 0);
+ unlock_buffer(bh);
+ }
+ curr_off = next_off;
+@@ -2120,7 +2160,7 @@ void journal_file_buffer(struct journal_head *jh,
+ */
+ void __journal_refile_buffer(struct journal_head *jh)
+ {
+- int was_dirty;
++ int was_dirty, jlist;
+ struct buffer_head *bh = jh2bh(jh);
+
+ J_ASSERT_JH(jh, jbd_is_locked_bh_state(bh));
+@@ -2142,8 +2182,13 @@ void __journal_refile_buffer(struct journal_head *jh)
+ __journal_temp_unlink_buffer(jh);
+ jh->b_transaction = jh->b_next_transaction;
+ jh->b_next_transaction = NULL;
+- __journal_file_buffer(jh, jh->b_transaction,
+- jh->b_modified ? BJ_Metadata : BJ_Reserved);
++ if (buffer_freed(bh))
++ jlist = BJ_Forget;
++ else if (jh->b_modified)
++ jlist = BJ_Metadata;
++ else
++ jlist = BJ_Reserved;
++ __journal_file_buffer(jh, jh->b_transaction, jlist);
+ J_ASSERT_JH(jh, jh->b_transaction->t_state == T_RUNNING);
+
+ if (was_dirty)
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 6d27757..ab87b05 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -2610,11 +2610,16 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ len = maxcount;
+ v = 0;
+ while (len > 0) {
+- pn = resp->rqstp->rq_resused++;
++ pn = resp->rqstp->rq_resused;
++ if (!resp->rqstp->rq_respages[pn]) { /* ran out of pages */
++ maxcount -= len;
++ break;
++ }
+ resp->rqstp->rq_vec[v].iov_base =
+ page_address(resp->rqstp->rq_respages[pn]);
+ resp->rqstp->rq_vec[v].iov_len =
+ len < PAGE_SIZE ? len : PAGE_SIZE;
++ resp->rqstp->rq_resused++;
+ v++;
+ len -= PAGE_SIZE;
+ }
+@@ -2662,6 +2667,8 @@ nfsd4_encode_readlink(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd
+ return nfserr;
+ if (resp->xbuf->page_len)
+ return nfserr_resource;
++ if (!resp->rqstp->rq_respages[resp->rqstp->rq_resused])
++ return nfserr_resource;
+
+ page = page_address(resp->rqstp->rq_respages[resp->rqstp->rq_resused++]);
+
+@@ -2711,6 +2718,8 @@ nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4
+ return nfserr;
+ if (resp->xbuf->page_len)
+ return nfserr_resource;
++ if (!resp->rqstp->rq_respages[resp->rqstp->rq_resused])
++ return nfserr_resource;
+
+ RESERVE_SPACE(8); /* verifier */
+ savep = p;
+diff --git a/fs/nls/nls_base.c b/fs/nls/nls_base.c
+index 44a88a9..0eb059e 100644
+--- a/fs/nls/nls_base.c
++++ b/fs/nls/nls_base.c
+@@ -114,34 +114,57 @@ int utf32_to_utf8(unicode_t u, u8 *s, int maxlen)
+ }
+ EXPORT_SYMBOL(utf32_to_utf8);
+
+-int utf8s_to_utf16s(const u8 *s, int len, wchar_t *pwcs)
++static inline void put_utf16(wchar_t *s, unsigned c, enum utf16_endian endian)
++{
++ switch (endian) {
++ default:
++ *s = (wchar_t) c;
++ break;
++ case UTF16_LITTLE_ENDIAN:
++ *s = __cpu_to_le16(c);
++ break;
++ case UTF16_BIG_ENDIAN:
++ *s = __cpu_to_be16(c);
++ break;
++ }
++}
++
++int utf8s_to_utf16s(const u8 *s, int len, enum utf16_endian endian,
++ wchar_t *pwcs, int maxlen)
+ {
+ u16 *op;
+ int size;
+ unicode_t u;
+
+ op = pwcs;
+- while (*s && len > 0) {
++ while (len > 0 && maxlen > 0 && *s) {
+ if (*s & 0x80) {
+ size = utf8_to_utf32(s, len, &u);
+ if (size < 0)
+ return -EINVAL;
++ s += size;
++ len -= size;
+
+ if (u >= PLANE_SIZE) {
++ if (maxlen < 2)
++ break;
+ u -= PLANE_SIZE;
+- *op++ = (wchar_t) (SURROGATE_PAIR |
+- ((u >> 10) & SURROGATE_BITS));
+- *op++ = (wchar_t) (SURROGATE_PAIR |
++ put_utf16(op++, SURROGATE_PAIR |
++ ((u >> 10) & SURROGATE_BITS),
++ endian);
++ put_utf16(op++, SURROGATE_PAIR |
+ SURROGATE_LOW |
+- (u & SURROGATE_BITS));
++ (u & SURROGATE_BITS),
++ endian);
++ maxlen -= 2;
+ } else {
+- *op++ = (wchar_t) u;
++ put_utf16(op++, u, endian);
++ maxlen--;
+ }
+- s += size;
+- len -= size;
+ } else {
+- *op++ = *s++;
++ put_utf16(op++, *s++, endian);
+ len--;
++ maxlen--;
+ }
+ }
+ return op - pwcs;
+diff --git a/fs/splice.c b/fs/splice.c
+index bb92b7c5..cdad986 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -30,6 +30,7 @@
+ #include <linux/syscalls.h>
+ #include <linux/uio.h>
+ #include <linux/security.h>
++#include <linux/socket.h>
+
+ /*
+ * Attempt to steal a page from a pipe buffer. This should perhaps go into
+@@ -637,7 +638,11 @@ static int pipe_to_sendpage(struct pipe_inode_info *pipe,
+
+ ret = buf->ops->confirm(pipe, buf);
+ if (!ret) {
+- more = (sd->flags & SPLICE_F_MORE) || sd->len < sd->total_len;
++ more = (sd->flags & SPLICE_F_MORE) ? MSG_MORE : 0;
++
++ if (sd->len < sd->total_len && pipe->nrbufs > 1)
++ more |= MSG_SENDPAGE_NOTLAST;
++
+ if (file->f_op && file->f_op->sendpage)
+ ret = file->f_op->sendpage(file, buf->page, buf->offset,
+ sd->len, &pos, more);
+diff --git a/fs/sysfs/dir.c b/fs/sysfs/dir.c
+index e020183..5e7279a 100644
+--- a/fs/sysfs/dir.c
++++ b/fs/sysfs/dir.c
+@@ -440,20 +440,18 @@ int __sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd)
+ /**
+ * sysfs_pathname - return full path to sysfs dirent
+ * @sd: sysfs_dirent whose path we want
+- * @path: caller allocated buffer
++ * @path: caller allocated buffer of size PATH_MAX
+ *
+ * Gives the name "/" to the sysfs_root entry; any path returned
+ * is relative to wherever sysfs is mounted.
+- *
+- * XXX: does no error checking on @path size
+ */
+ static char *sysfs_pathname(struct sysfs_dirent *sd, char *path)
+ {
+ if (sd->s_parent) {
+ sysfs_pathname(sd->s_parent, path);
+- strcat(path, "/");
++ strlcat(path, "/", PATH_MAX);
+ }
+- strcat(path, sd->s_name);
++ strlcat(path, sd->s_name, PATH_MAX);
+ return path;
+ }
+
+@@ -486,9 +484,11 @@ int sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd)
+ char *path = kzalloc(PATH_MAX, GFP_KERNEL);
+ WARN(1, KERN_WARNING
+ "sysfs: cannot create duplicate filename '%s'\n",
+- (path == NULL) ? sd->s_name :
+- strcat(strcat(sysfs_pathname(acxt->parent_sd, path), "/"),
+- sd->s_name));
++ (path == NULL) ? sd->s_name
++ : (sysfs_pathname(acxt->parent_sd, path),
++ strlcat(path, "/", PATH_MAX),
++ strlcat(path, sd->s_name, PATH_MAX),
++ path));
+ kfree(path);
+ }
+
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 6d24c2c..3c4ffb2 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -648,6 +648,8 @@ static struct buffer_head *inode_getblk(struct inode *inode, sector_t block,
+ goal, err);
+ if (!newblocknum) {
+ brelse(prev_epos.bh);
++ brelse(cur_epos.bh);
++ brelse(next_epos.bh);
+ *err = -ENOSPC;
+ return NULL;
+ }
+@@ -678,6 +680,8 @@ static struct buffer_head *inode_getblk(struct inode *inode, sector_t block,
+ udf_update_extents(inode, laarr, startnum, endnum, &prev_epos);
+
+ brelse(prev_epos.bh);
++ brelse(cur_epos.bh);
++ brelse(next_epos.bh);
+
+ newblock = udf_get_pblock(inode->i_sb, newblocknum,
+ iinfo->i_location.partitionReferenceNum, 0);
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index 21dad8c..b754151 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -1331,6 +1331,7 @@ static int udf_encode_fh(struct dentry *de, __u32 *fh, int *lenp,
+ *lenp = 3;
+ fid->udf.block = location.logicalBlockNum;
+ fid->udf.partref = location.partitionReferenceNum;
++ fid->udf.parent_partref = 0;
+ fid->udf.generation = inode->i_generation;
+
+ if (connectable && !S_ISDIR(inode->i_mode)) {
+diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
+index d113b72..efa82c9 100644
+--- a/fs/udf/udf_sb.h
++++ b/fs/udf/udf_sb.h
+@@ -78,7 +78,7 @@ struct udf_virtual_data {
+ struct udf_bitmap {
+ __u32 s_extLength;
+ __u32 s_extPosition;
+- __u16 s_nr_groups;
++ int s_nr_groups;
+ struct buffer_head **s_block_bitmap;
+ };
+
+diff --git a/include/asm-generic/signal.h b/include/asm-generic/signal.h
+index 555c0ae..743f7a5 100644
+--- a/include/asm-generic/signal.h
++++ b/include/asm-generic/signal.h
+@@ -99,6 +99,10 @@ typedef unsigned long old_sigset_t;
+
+ #include <asm-generic/signal-defs.h>
+
++#ifdef SA_RESTORER
++#define __ARCH_HAS_SA_RESTORER
++#endif
++
+ struct sigaction {
+ __sighandler_t sa_handler;
+ unsigned long sa_flags;
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index a3d802e..9ffffec 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -71,8 +71,6 @@ extern struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
+ #define BINPRM_FLAGS_EXECFD_BIT 1
+ #define BINPRM_FLAGS_EXECFD (1 << BINPRM_FLAGS_EXECFD_BIT)
+
+-#define BINPRM_MAX_RECURSION 4
+-
+ /*
+ * This structure defines the functions that are used to load the binary formats that
+ * linux accepts.
+@@ -122,6 +120,7 @@ extern int setup_arg_pages(struct linux_binprm * bprm,
+ unsigned long stack_top,
+ int executable_stack);
+ extern int bprm_mm_init(struct linux_binprm *bprm);
++extern int bprm_change_interp(char *interp, struct linux_binprm *bprm);
+ extern int copy_strings_kernel(int argc,char ** argv,struct linux_binprm *bprm);
+ extern int prepare_bprm_creds(struct linux_binprm *bprm);
+ extern void install_exec_creds(struct linux_binprm *bprm);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 5eb6cb0..ec9c10b 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -456,8 +456,7 @@ struct request_queue
+ #define QUEUE_FLAG_NONROT 14 /* non-rotational device (SSD) */
+ #define QUEUE_FLAG_VIRT QUEUE_FLAG_NONROT /* paravirt device */
+ #define QUEUE_FLAG_IO_STAT 15 /* do IO stats */
+-#define QUEUE_FLAG_CQ 16 /* hardware does queuing */
+-#define QUEUE_FLAG_DISCARD 17 /* supports DISCARD */
++#define QUEUE_FLAG_DISCARD 16 /* supports DISCARD */
+
+ #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \
+ (1 << QUEUE_FLAG_STACKABLE) | \
+@@ -580,7 +579,6 @@ enum {
+
+ #define blk_queue_plugged(q) test_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
+ #define blk_queue_tagged(q) test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags)
+-#define blk_queue_queuing(q) test_bit(QUEUE_FLAG_CQ, &(q)->queue_flags)
+ #define blk_queue_stopped(q) test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
+ #define blk_queue_nomerges(q) test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
+ #define blk_queue_nonrot(q) test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
+diff --git a/include/linux/kmod.h b/include/linux/kmod.h
+index 0546fe7..93e732e 100644
+--- a/include/linux/kmod.h
++++ b/include/linux/kmod.h
+@@ -64,6 +64,8 @@ enum umh_wait {
+ UMH_WAIT_PROC = 1, /* wait for the process to complete */
+ };
+
++#define UMH_KILLABLE 4 /* wait for EXEC/PROC killable */
++
+ /* Actually execute the sub-process */
+ int call_usermodehelper_exec(struct subprocess_info *info, enum umh_wait wait);
+
+diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
+index 085c903..e68b5927 100644
+--- a/include/linux/mempolicy.h
++++ b/include/linux/mempolicy.h
+@@ -180,7 +180,7 @@ struct sp_node {
+
+ struct shared_policy {
+ struct rb_root root;
+- spinlock_t lock;
++ struct mutex mutex;
+ };
+
+ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
+diff --git a/include/linux/msdos_fs.h b/include/linux/msdos_fs.h
+index ce38f1c..34066e6 100644
+--- a/include/linux/msdos_fs.h
++++ b/include/linux/msdos_fs.h
+@@ -15,6 +15,7 @@
+ #define MSDOS_DPB_BITS 4 /* log2(MSDOS_DPB) */
+ #define MSDOS_DPS (SECTOR_SIZE / sizeof(struct msdos_dir_entry))
+ #define MSDOS_DPS_BITS 4 /* log2(MSDOS_DPS) */
++#define MSDOS_LONGNAME 256 /* maximum name length */
+ #define CF_LE_W(v) le16_to_cpu(v)
+ #define CF_LE_L(v) le32_to_cpu(v)
+ #define CT_LE_W(v) cpu_to_le16(v)
+@@ -47,8 +48,8 @@
+ #define DELETED_FLAG 0xe5 /* marks file as deleted when in name[0] */
+ #define IS_FREE(n) (!*(n) || *(n) == DELETED_FLAG)
+
++#define FAT_LFN_LEN 255 /* maximum long name length */
+ #define MSDOS_NAME 11 /* maximum name length */
+-#define MSDOS_LONGNAME 256 /* maximum name length */
+ #define MSDOS_SLOTS 21 /* max # of slots for short and long names */
+ #define MSDOS_DOT ". " /* ".", padded to MSDOS_NAME chars */
+ #define MSDOS_DOTDOT ".. " /* "..", padded to MSDOS_NAME chars */
+diff --git a/include/linux/nls.h b/include/linux/nls.h
+index d47beef..5dc635f 100644
+--- a/include/linux/nls.h
++++ b/include/linux/nls.h
+@@ -43,7 +43,7 @@ enum utf16_endian {
+ UTF16_BIG_ENDIAN
+ };
+
+-/* nls.c */
++/* nls_base.c */
+ extern int register_nls(struct nls_table *);
+ extern int unregister_nls(struct nls_table *);
+ extern struct nls_table *load_nls(char *);
+@@ -52,7 +52,8 @@ extern struct nls_table *load_nls_default(void);
+
+ extern int utf8_to_utf32(const u8 *s, int len, unicode_t *pu);
+ extern int utf32_to_utf8(unicode_t u, u8 *s, int maxlen);
+-extern int utf8s_to_utf16s(const u8 *s, int len, wchar_t *pwcs);
++extern int utf8s_to_utf16s(const u8 *s, int len,
++ enum utf16_endian endian, wchar_t *pwcs, int maxlen);
+ extern int utf16s_to_utf8s(const wchar_t *pwcs, int len,
+ enum utf16_endian endian, u8 *s, int maxlen);
+
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index 6b202b1..f451772 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -362,7 +362,7 @@ static inline int PageCompound(struct page *page)
+ * pages on the LRU and/or pagecache.
+ */
+ TESTPAGEFLAG(Compound, compound)
+-__PAGEFLAG(Head, compound)
++__SETPAGEFLAG(Head, compound) __CLEARPAGEFLAG(Head, compound)
+
+ /*
+ * PG_reclaim is used in combination with PG_compound to mark the
+@@ -374,8 +374,14 @@ __PAGEFLAG(Head, compound)
+ * PG_compound & PG_reclaim => Tail page
+ * PG_compound & ~PG_reclaim => Head page
+ */
++#define PG_head_mask ((1L << PG_compound))
+ #define PG_head_tail_mask ((1L << PG_compound) | (1L << PG_reclaim))
+
++static inline int PageHead(struct page *page)
++{
++ return ((page->flags & PG_head_tail_mask) == PG_head_mask);
++}
++
+ static inline int PageTail(struct page *page)
+ {
+ return ((page->flags & PG_head_tail_mask) == PG_head_tail_mask);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 71849bf..73c3b9b 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2459,7 +2459,16 @@ static inline void thread_group_cputime_free(struct signal_struct *sig)
+ extern void recalc_sigpending_and_wake(struct task_struct *t);
+ extern void recalc_sigpending(void);
+
+-extern void signal_wake_up(struct task_struct *t, int resume_stopped);
++extern void signal_wake_up_state(struct task_struct *t, unsigned int state);
++
++static inline void signal_wake_up(struct task_struct *t, bool resume)
++{
++ signal_wake_up_state(t, resume ? TASK_WAKEKILL : 0);
++}
++static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume)
++{
++ signal_wake_up_state(t, resume ? __TASK_TRACED : 0);
++}
+
+ /*
+ * Wrappers for p->thread_info->cpu access. No-op on UP.
+diff --git a/include/linux/socket.h b/include/linux/socket.h
+index 3273a0c..3124c51 100644
+--- a/include/linux/socket.h
++++ b/include/linux/socket.h
+@@ -246,7 +246,7 @@ struct ucred {
+ #define MSG_ERRQUEUE 0x2000 /* Fetch message from error queue */
+ #define MSG_NOSIGNAL 0x4000 /* Do not generate SIGPIPE */
+ #define MSG_MORE 0x8000 /* Sender will send more */
+-
++#define MSG_SENDPAGE_NOTLAST 0x20000 /* sendpage() internal : not the last page */
+ #define MSG_EOF MSG_FIN
+
+ #define MSG_CMSG_CLOEXEC 0x40000000 /* Set close_on_exit for file
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 47004f3..cf65e77 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -56,7 +56,15 @@ struct ip_options {
+ unsigned char __data[0];
+ };
+
+-#define optlength(opt) (sizeof(struct ip_options) + opt->optlen)
++struct ip_options_rcu {
++ struct rcu_head rcu;
++ struct ip_options opt;
++};
++
++struct ip_options_data {
++ struct ip_options_rcu opt;
++ char data[40];
++};
+
+ struct inet_request_sock {
+ struct request_sock req;
+@@ -77,7 +85,7 @@ struct inet_request_sock {
+ acked : 1,
+ no_srccheck: 1;
+ kmemcheck_bitfield_end(flags);
+- struct ip_options *opt;
++ struct ip_options_rcu *opt;
+ };
+
+ static inline struct inet_request_sock *inet_rsk(const struct request_sock *sk)
+@@ -122,7 +130,7 @@ struct inet_sock {
+ __be32 saddr;
+ __s16 uc_ttl;
+ __u16 cmsg_flags;
+- struct ip_options *opt;
++ struct ip_options_rcu *inet_opt;
+ __be16 sport;
+ __u16 id;
+ __u8 tos;
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 69db943..a7d4675 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -54,7 +54,7 @@ struct ipcm_cookie
+ {
+ __be32 addr;
+ int oif;
+- struct ip_options *opt;
++ struct ip_options_rcu *opt;
+ union skb_shared_tx shtx;
+ };
+
+@@ -92,7 +92,7 @@ extern int igmp_mc_proc_init(void);
+
+ extern int ip_build_and_send_pkt(struct sk_buff *skb, struct sock *sk,
+ __be32 saddr, __be32 daddr,
+- struct ip_options *opt);
++ struct ip_options_rcu *opt);
+ extern int ip_rcv(struct sk_buff *skb, struct net_device *dev,
+ struct packet_type *pt, struct net_device *orig_dev);
+ extern int ip_local_deliver(struct sk_buff *skb);
+@@ -362,14 +362,15 @@ extern int ip_forward(struct sk_buff *skb);
+ * Functions provided by ip_options.c
+ */
+
+-extern void ip_options_build(struct sk_buff *skb, struct ip_options *opt, __be32 daddr, struct rtable *rt, int is_frag);
++extern void ip_options_build(struct sk_buff *skb, struct ip_options *opt,
++ __be32 daddr, struct rtable *rt, int is_frag);
+ extern int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb);
+ extern void ip_options_fragment(struct sk_buff *skb);
+ extern int ip_options_compile(struct net *net,
+ struct ip_options *opt, struct sk_buff *skb);
+-extern int ip_options_get(struct net *net, struct ip_options **optp,
++extern int ip_options_get(struct net *net, struct ip_options_rcu **optp,
+ unsigned char *data, int optlen);
+-extern int ip_options_get_from_user(struct net *net, struct ip_options **optp,
++extern int ip_options_get_from_user(struct net *net, struct ip_options_rcu **optp,
+ unsigned char __user *data, int optlen);
+ extern void ip_options_undo(struct ip_options * opt);
+ extern void ip_forward_options(struct sk_buff *skb);
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 639bbf0..52d86da 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -449,17 +449,7 @@ static inline int ipv6_addr_diff(const struct in6_addr *a1, const struct in6_add
+ return __ipv6_addr_diff(a1, a2, sizeof(struct in6_addr));
+ }
+
+-static __inline__ void ipv6_select_ident(struct frag_hdr *fhdr)
+-{
+- static u32 ipv6_fragmentation_id = 1;
+- static DEFINE_SPINLOCK(ip6_id_lock);
+-
+- spin_lock_bh(&ip6_id_lock);
+- fhdr->identification = htonl(ipv6_fragmentation_id);
+- if (++ipv6_fragmentation_id == 0)
+- ipv6_fragmentation_id = 1;
+- spin_unlock_bh(&ip6_id_lock);
+-}
++extern void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt);
+
+ /*
+ * Prototypes exported by ipv6
+diff --git a/include/net/transp_v6.h b/include/net/transp_v6.h
+index d65381c..8beefe1 100644
+--- a/include/net/transp_v6.h
++++ b/include/net/transp_v6.h
+@@ -16,6 +16,8 @@ extern struct proto tcpv6_prot;
+
+ struct flowi;
+
++extern void initialize_hashidentrnd(void);
++
+ /* extention headers */
+ extern int ipv6_exthdrs_init(void);
+ extern void ipv6_exthdrs_exit(void);
+diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
+index 34c46ab..b3cffec 100644
+--- a/include/scsi/scsi.h
++++ b/include/scsi/scsi.h
+@@ -145,10 +145,10 @@ struct scsi_cmnd;
+
+ /* defined in T10 SCSI Primary Commands-2 (SPC2) */
+ struct scsi_varlen_cdb_hdr {
+- u8 opcode; /* opcode always == VARIABLE_LENGTH_CMD */
+- u8 control;
+- u8 misc[5];
+- u8 additional_cdb_length; /* total cdb length - 8 */
++ __u8 opcode; /* opcode always == VARIABLE_LENGTH_CMD */
++ __u8 control;
++ __u8 misc[5];
++ __u8 additional_cdb_length; /* total cdb length - 8 */
+ __be16 service_action;
+ /* service specific data follows */
+ };
+diff --git a/include/scsi/scsi_netlink.h b/include/scsi/scsi_netlink.h
+index 536752c..58ce8fe 100644
+--- a/include/scsi/scsi_netlink.h
++++ b/include/scsi/scsi_netlink.h
+@@ -105,8 +105,8 @@ struct scsi_nl_host_vendor_msg {
+ * PCI : ID data is the 16 bit PCI Registered Vendor ID
+ */
+ #define SCSI_NL_VID_TYPE_SHIFT 56
+-#define SCSI_NL_VID_TYPE_MASK ((u64)0xFF << SCSI_NL_VID_TYPE_SHIFT)
+-#define SCSI_NL_VID_TYPE_PCI ((u64)0x01 << SCSI_NL_VID_TYPE_SHIFT)
++#define SCSI_NL_VID_TYPE_MASK ((__u64)0xFF << SCSI_NL_VID_TYPE_SHIFT)
++#define SCSI_NL_VID_TYPE_PCI ((__u64)0x01 << SCSI_NL_VID_TYPE_SHIFT)
+ #define SCSI_NL_VID_ID_MASK (~ SCSI_NL_VID_TYPE_MASK)
+
+
+diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
+index eaf46bd..a8dc32a 100644
+--- a/include/trace/events/kmem.h
++++ b/include/trace/events/kmem.h
+@@ -293,7 +293,7 @@ TRACE_EVENT(mm_page_alloc,
+
+ TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
+ __entry->page,
+- page_to_pfn(__entry->page),
++ __entry->page ? page_to_pfn(__entry->page) : 0,
+ __entry->order,
+ __entry->migratetype,
+ show_gfp_flags(__entry->gfp_flags))
+@@ -319,7 +319,7 @@ TRACE_EVENT(mm_page_alloc_zone_locked,
+
+ TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
+ __entry->page,
+- page_to_pfn(__entry->page),
++ __entry->page ? page_to_pfn(__entry->page) : 0,
+ __entry->order,
+ __entry->migratetype,
+ __entry->order == 0)
+diff --git a/kernel/async.c b/kernel/async.c
+index 27235f5..397a7c7 100644
+--- a/kernel/async.c
++++ b/kernel/async.c
+@@ -93,6 +93,13 @@ static async_cookie_t __lowest_in_progress(struct list_head *running)
+ {
+ struct async_entry *entry;
+
++ if (!running) { /* just check the entry count */
++ if (atomic_read(&entry_count))
++ return 0; /* smaller than any cookie */
++ else
++ return next_cookie;
++ }
++
+ if (!list_empty(running)) {
+ entry = list_first_entry(running,
+ struct async_entry, list);
+@@ -248,9 +255,7 @@ EXPORT_SYMBOL_GPL(async_schedule_domain);
+ */
+ void async_synchronize_full(void)
+ {
+- do {
+- async_synchronize_cookie(next_cookie);
+- } while (!list_empty(&async_running) || !list_empty(&async_pending));
++ async_synchronize_cookie_domain(next_cookie, NULL);
+ }
+ EXPORT_SYMBOL_GPL(async_synchronize_full);
+
+@@ -270,7 +275,7 @@ EXPORT_SYMBOL_GPL(async_synchronize_full_domain);
+ /**
+ * async_synchronize_cookie_domain - synchronize asynchronous function calls within a certain domain with cookie checkpointing
+ * @cookie: async_cookie_t to use as checkpoint
+- * @running: running list to synchronize on
++ * @running: running list to synchronize on, NULL indicates all lists
+ *
+ * This function waits until all asynchronous function calls for the
+ * synchronization domain specified by the running list @list submitted
+diff --git a/kernel/cgroup.c b/kernel/cgroup.c
+index 1fbcc74..04a9704 100644
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -1992,9 +1992,7 @@ static int cgroup_create_dir(struct cgroup *cgrp, struct dentry *dentry,
+ dentry->d_fsdata = cgrp;
+ inc_nlink(parent->d_inode);
+ rcu_assign_pointer(cgrp->dentry, dentry);
+- dget(dentry);
+ }
+- dput(dentry);
+
+ return error;
+ }
+diff --git a/kernel/kmod.c b/kernel/kmod.c
+index a061472..8ecc509 100644
+--- a/kernel/kmod.c
++++ b/kernel/kmod.c
+@@ -53,6 +53,50 @@ static DECLARE_RWSEM(umhelper_sem);
+ */
+ char modprobe_path[KMOD_PATH_LEN] = "/sbin/modprobe";
+
++static void free_modprobe_argv(char **argv, char **envp)
++{
++ kfree(argv[3]); /* check call_modprobe() */
++ kfree(argv);
++}
++
++static int call_modprobe(char *module_name, int wait)
++{
++ static char *envp[] = { "HOME=/",
++ "TERM=linux",
++ "PATH=/sbin:/usr/sbin:/bin:/usr/bin",
++ NULL };
++ struct subprocess_info *info;
++
++ char **argv = kmalloc(sizeof(char *[5]), GFP_KERNEL);
++ if (!argv)
++ goto out;
++
++ module_name = kstrdup(module_name, GFP_KERNEL);
++ if (!module_name)
++ goto free_argv;
++
++ argv[0] = modprobe_path;
++ argv[1] = "-q";
++ argv[2] = "--";
++ argv[3] = module_name; /* check free_modprobe_argv() */
++ argv[4] = NULL;
++
++ info = call_usermodehelper_setup(argv[0], argv, envp, GFP_ATOMIC);
++ if (!info)
++ goto free_module_name;
++
++ call_usermodehelper_setcleanup(info, free_modprobe_argv);
++
++ return call_usermodehelper_exec(info, wait | UMH_KILLABLE);
++
++free_module_name:
++ kfree(module_name);
++free_argv:
++ kfree(argv);
++out:
++ return -ENOMEM;
++}
++
+ /**
+ * __request_module - try to load a kernel module
+ * @wait: wait (or not) for the operation to complete
+@@ -74,11 +118,6 @@ int __request_module(bool wait, const char *fmt, ...)
+ char module_name[MODULE_NAME_LEN];
+ unsigned int max_modprobes;
+ int ret;
+- char *argv[] = { modprobe_path, "-q", "--", module_name, NULL };
+- static char *envp[] = { "HOME=/",
+- "TERM=linux",
+- "PATH=/sbin:/usr/sbin:/bin:/usr/bin",
+- NULL };
+ static atomic_t kmod_concurrent = ATOMIC_INIT(0);
+ #define MAX_KMOD_CONCURRENT 50 /* Completely arbitrary value - KAO */
+ static int kmod_loop_msg;
+@@ -121,8 +160,8 @@ int __request_module(bool wait, const char *fmt, ...)
+
+ trace_module_request(module_name, wait, _RET_IP_);
+
+- ret = call_usermodehelper(modprobe_path, argv, envp,
+- wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);
++ ret = call_modprobe(module_name, wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);
++
+ atomic_dec(&kmod_concurrent);
+ return ret;
+ }
+@@ -193,7 +232,7 @@ static int ____call_usermodehelper(void *data)
+
+ /* Exec failed? */
+ sub_info->retval = retval;
+- do_exit(0);
++ return 0;
+ }
+
+ void call_usermodehelper_freeinfo(struct subprocess_info *info)
+@@ -206,6 +245,19 @@ void call_usermodehelper_freeinfo(struct subprocess_info *info)
+ }
+ EXPORT_SYMBOL(call_usermodehelper_freeinfo);
+
++static void umh_complete(struct subprocess_info *sub_info)
++{
++ struct completion *comp = xchg(&sub_info->complete, NULL);
++ /*
++ * See call_usermodehelper_exec(). If xchg() returns NULL
++ * we own sub_info, the UMH_KILLABLE caller has gone away.
++ */
++ if (comp)
++ complete(comp);
++ else
++ call_usermodehelper_freeinfo(sub_info);
++}
++
+ /* Keventd can't block, but this (a child) can. */
+ static int wait_for_helper(void *data)
+ {
+@@ -245,7 +297,7 @@ static int wait_for_helper(void *data)
+ if (sub_info->wait == UMH_NO_WAIT)
+ call_usermodehelper_freeinfo(sub_info);
+ else
+- complete(sub_info->complete);
++ umh_complete(sub_info);
+ return 0;
+ }
+
+@@ -259,6 +311,9 @@ static void __call_usermodehelper(struct work_struct *work)
+
+ BUG_ON(atomic_read(&sub_info->cred->usage) != 1);
+
++ if (wait != UMH_NO_WAIT)
++ wait &= ~UMH_KILLABLE;
++
+ /* CLONE_VFORK: wait until the usermode helper has execve'd
+ * successfully We need the data structures to stay around
+ * until that is done. */
+@@ -280,7 +335,7 @@ static void __call_usermodehelper(struct work_struct *work)
+ /* FALLTHROUGH */
+
+ case UMH_WAIT_EXEC:
+- complete(sub_info->complete);
++ umh_complete(sub_info);
+ }
+ }
+
+@@ -520,9 +575,21 @@ int call_usermodehelper_exec(struct subprocess_info *sub_info,
+ queue_work(khelper_wq, &sub_info->work);
+ if (wait == UMH_NO_WAIT) /* task has freed sub_info */
+ goto unlock;
++
++ if (wait & UMH_KILLABLE) {
++ retval = wait_for_completion_killable(&done);
++ if (!retval)
++ goto wait_done;
++
++ /* umh_complete() will see NULL and free sub_info */
++ if (xchg(&sub_info->complete, NULL))
++ goto unlock;
++ /* fallthrough, umh_complete() was already called */
++ }
++
+ wait_for_completion(&done);
++wait_done:
+ retval = sub_info->retval;
+-
+ out:
+ call_usermodehelper_freeinfo(sub_info);
+ unlock:
+diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
+index 5c9dc22..ea83f5d 100644
+--- a/kernel/posix-cpu-timers.c
++++ b/kernel/posix-cpu-timers.c
+@@ -1537,8 +1537,10 @@ static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
+ while (!signal_pending(current)) {
+ if (timer.it.cpu.expires.sched == 0) {
+ /*
+- * Our timer fired and was reset.
++ * Our timer fired and was reset, below
++ * deletion can not fail.
+ */
++ posix_cpu_timer_del(&timer);
+ spin_unlock_irq(&timer.it_lock);
+ return 0;
+ }
+@@ -1556,9 +1558,26 @@ static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
+ * We were interrupted by a signal.
+ */
+ sample_to_timespec(which_clock, timer.it.cpu.expires, rqtp);
+- posix_cpu_timer_set(&timer, 0, &zero_it, it);
++ error = posix_cpu_timer_set(&timer, 0, &zero_it, it);
++ if (!error) {
++ /*
++ * Timer is now unarmed, deletion can not fail.
++ */
++ posix_cpu_timer_del(&timer);
++ }
+ spin_unlock_irq(&timer.it_lock);
+
++ while (error == TIMER_RETRY) {
++ /*
++ * We need to handle case when timer was or is in the
++ * middle of firing. In other cases we already freed
++ * resources.
++ */
++ spin_lock_irq(&timer.it_lock);
++ error = posix_cpu_timer_del(&timer);
++ spin_unlock_irq(&timer.it_lock);
++ }
++
+ if ((it->it_value.tv_sec | it->it_value.tv_nsec) == 0) {
+ /*
+ * It actually did fire already.
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 05625f6..d9c8c47 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -56,7 +56,7 @@ static void ptrace_untrace(struct task_struct *child)
+ child->signal->group_stop_count)
+ __set_task_state(child, TASK_STOPPED);
+ else
+- signal_wake_up(child, 1);
++ ptrace_signal_wake_up(child, true);
+ }
+ spin_unlock(&child->sighand->siglock);
+ }
+@@ -80,6 +80,40 @@ void __ptrace_unlink(struct task_struct *child)
+ ptrace_untrace(child);
+ }
+
++/* Ensure that nothing can wake it up, even SIGKILL */
++static bool ptrace_freeze_traced(struct task_struct *task, int kill)
++{
++ bool ret = true;
++
++ spin_lock_irq(&task->sighand->siglock);
++ if (task_is_stopped(task) && !__fatal_signal_pending(task))
++ task->state = __TASK_TRACED;
++ else if (!kill) {
++ if (task_is_traced(task) && !__fatal_signal_pending(task))
++ task->state = __TASK_TRACED;
++ else
++ ret = false;
++ }
++ spin_unlock_irq(&task->sighand->siglock);
++
++ return ret;
++}
++
++static void ptrace_unfreeze_traced(struct task_struct *task)
++{
++ if (task->state != __TASK_TRACED)
++ return;
++
++ WARN_ON(!task->ptrace || task->parent != current);
++
++ spin_lock_irq(&task->sighand->siglock);
++ if (__fatal_signal_pending(task))
++ wake_up_state(task, __TASK_TRACED);
++ else
++ task->state = TASK_TRACED;
++ spin_unlock_irq(&task->sighand->siglock);
++}
++
+ /*
+ * Check that we have indeed attached to the thing..
+ */
+@@ -95,25 +129,29 @@ int ptrace_check_attach(struct task_struct *child, int kill)
+ * be changed by us so it's not changing right after this.
+ */
+ read_lock(&tasklist_lock);
+- if ((child->ptrace & PT_PTRACED) && child->parent == current) {
+- ret = 0;
++ if (child->ptrace && child->parent == current) {
++ WARN_ON(child->state == __TASK_TRACED);
+ /*
+ * child->sighand can't be NULL, release_task()
+ * does ptrace_unlink() before __exit_signal().
+ */
+- spin_lock_irq(&child->sighand->siglock);
+- if (task_is_stopped(child))
+- child->state = TASK_TRACED;
+- else if (!task_is_traced(child) && !kill)
+- ret = -ESRCH;
+- spin_unlock_irq(&child->sighand->siglock);
++ if (ptrace_freeze_traced(child, kill))
++ ret = 0;
+ }
+ read_unlock(&tasklist_lock);
+
+- if (!ret && !kill)
+- ret = wait_task_inactive(child, TASK_TRACED) ? 0 : -ESRCH;
++ if (!ret && !kill) {
++ if (!wait_task_inactive(child, __TASK_TRACED)) {
++ /*
++ * This can only happen if may_ptrace_stop() fails and
++ * ptrace_stop() changes ->state back to TASK_RUNNING,
++ * so we should not worry about leaking __TASK_TRACED.
++ */
++ WARN_ON(child->state == __TASK_TRACED);
++ ret = -ESRCH;
++ }
++ }
+
+- /* All systems go.. */
+ return ret;
+ }
+
+@@ -506,7 +544,7 @@ static int ptrace_resume(struct task_struct *child, long request, long data)
+ }
+
+ child->exit_code = data;
+- wake_up_process(child);
++ wake_up_state(child, __TASK_TRACED);
+
+ return 0;
+ }
+@@ -637,6 +675,8 @@ SYSCALL_DEFINE4(ptrace, long, request, long, pid, long, addr, long, data)
+ goto out_put_task_struct;
+
+ ret = arch_ptrace(child, request, addr, data);
++ if (ret || request != PTRACE_DETACH)
++ ptrace_unfreeze_traced(child);
+
+ out_put_task_struct:
+ put_task_struct(child);
+@@ -752,8 +792,11 @@ asmlinkage long compat_sys_ptrace(compat_long_t request, compat_long_t pid,
+ }
+
+ ret = ptrace_check_attach(child, request == PTRACE_KILL);
+- if (!ret)
++ if (!ret) {
+ ret = compat_arch_ptrace(child, request, addr, data);
++ if (ret || request != PTRACE_DETACH)
++ ptrace_unfreeze_traced(child);
++ }
+
+ out_put_task_struct:
+ put_task_struct(child);
+diff --git a/kernel/resource.c b/kernel/resource.c
+index fb11a58..207915a 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -533,6 +533,7 @@ static void __init __reserve_region_with_split(struct resource *root,
+ struct resource *parent = root;
+ struct resource *conflict;
+ struct resource *res = kzalloc(sizeof(*res), GFP_ATOMIC);
++ struct resource *next_res = NULL;
+
+ if (!res)
+ return;
+@@ -542,21 +543,46 @@ static void __init __reserve_region_with_split(struct resource *root,
+ res->end = end;
+ res->flags = IORESOURCE_BUSY;
+
+- conflict = __request_resource(parent, res);
+- if (!conflict)
+- return;
++ while (1) {
+
+- /* failed, split and try again */
+- kfree(res);
++ conflict = __request_resource(parent, res);
++ if (!conflict) {
++ if (!next_res)
++ break;
++ res = next_res;
++ next_res = NULL;
++ continue;
++ }
+
+- /* conflict covered whole area */
+- if (conflict->start <= start && conflict->end >= end)
+- return;
++ /* conflict covered whole area */
++ if (conflict->start <= res->start &&
++ conflict->end >= res->end) {
++ kfree(res);
++ WARN_ON(next_res);
++ break;
++ }
++
++ /* failed, split and try again */
++ if (conflict->start > res->start) {
++ end = res->end;
++ res->end = conflict->start - 1;
++ if (conflict->end < end) {
++ next_res = kzalloc(sizeof(*next_res),
++ GFP_ATOMIC);
++ if (!next_res) {
++ kfree(res);
++ break;
++ }
++ next_res->name = name;
++ next_res->start = conflict->end + 1;
++ next_res->end = end;
++ next_res->flags = IORESOURCE_BUSY;
++ }
++ } else {
++ res->start = conflict->end + 1;
++ }
++ }
+
+- if (conflict->start > start)
+- __reserve_region_with_split(root, start, conflict->start-1, name);
+- if (conflict->end < end)
+- __reserve_region_with_split(root, conflict->end+1, end, name);
+ }
+
+ void __init reserve_region_with_split(struct resource *root,
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 0591df8..42bf6a6 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -2618,7 +2618,8 @@ out:
+ */
+ int wake_up_process(struct task_struct *p)
+ {
+- return try_to_wake_up(p, TASK_ALL, 0);
++ WARN_ON(task_is_stopped_or_traced(p));
++ return try_to_wake_up(p, TASK_NORMAL, 0);
+ }
+ EXPORT_SYMBOL(wake_up_process);
+
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 2494827..fb7e242 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -320,6 +320,9 @@ flush_signal_handlers(struct task_struct *t, int force_default)
+ if (force_default || ka->sa.sa_handler != SIG_IGN)
+ ka->sa.sa_handler = SIG_DFL;
+ ka->sa.sa_flags = 0;
++#ifdef __ARCH_HAS_SA_RESTORER
++ ka->sa.sa_restorer = NULL;
++#endif
+ sigemptyset(&ka->sa.sa_mask);
+ ka++;
+ }
+@@ -513,23 +516,17 @@ int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info)
+ * No need to set need_resched since signal event passing
+ * goes through ->blocked
+ */
+-void signal_wake_up(struct task_struct *t, int resume)
++void signal_wake_up_state(struct task_struct *t, unsigned int state)
+ {
+- unsigned int mask;
+-
+ set_tsk_thread_flag(t, TIF_SIGPENDING);
+-
+ /*
+- * For SIGKILL, we want to wake it up in the stopped/traced/killable
++ * TASK_WAKEKILL also means wake it up in the stopped/traced/killable
+ * case. We don't check t->state here because there is a race with it
+ * executing another processor and just now entering stopped state.
+ * By using wake_up_state, we ensure the process will wake up and
+ * handle its death signal.
+ */
+- mask = TASK_INTERRUPTIBLE;
+- if (resume)
+- mask |= TASK_WAKEKILL;
+- if (!wake_up_state(t, mask))
++ if (!wake_up_state(t, state | TASK_INTERRUPTIBLE))
+ kick_process(t);
+ }
+
+@@ -1530,6 +1527,10 @@ static inline int may_ptrace_stop(void)
+ * If SIGKILL was already sent before the caller unlocked
+ * ->siglock we must see ->core_state != NULL. Otherwise it
+ * is safe to enter schedule().
++ *
++ * This is almost outdated, a task with the pending SIGKILL can't
++ * block in TASK_TRACED. But PTRACE_EVENT_EXIT can be reported
++ * after SIGKILL was already dequeued.
+ */
+ if (unlikely(current->mm->core_state) &&
+ unlikely(current->mm == current->parent->mm))
+@@ -2300,7 +2301,7 @@ do_send_specific(pid_t tgid, pid_t pid, int sig, struct siginfo *info)
+
+ static int do_tkill(pid_t tgid, pid_t pid, int sig)
+ {
+- struct siginfo info;
++ struct siginfo info = {};
+
+ info.si_signo = sig;
+ info.si_errno = 0;
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 04a0252..d75c136 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -194,21 +194,21 @@ void local_bh_enable_ip(unsigned long ip)
+ EXPORT_SYMBOL(local_bh_enable_ip);
+
+ /*
+- * We restart softirq processing MAX_SOFTIRQ_RESTART times,
+- * and we fall back to softirqd after that.
++ * We restart softirq processing for at most 2 ms,
++ * and if need_resched() is not set.
+ *
+- * This number has been established via experimentation.
++ * These limits have been established via experimentation.
+ * The two things to balance is latency against fairness -
+ * we want to handle softirqs as soon as possible, but they
+ * should not be able to lock up the box.
+ */
+-#define MAX_SOFTIRQ_RESTART 10
++#define MAX_SOFTIRQ_TIME msecs_to_jiffies(2)
+
+ asmlinkage void __do_softirq(void)
+ {
+ struct softirq_action *h;
+ __u32 pending;
+- int max_restart = MAX_SOFTIRQ_RESTART;
++ unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
+ int cpu;
+
+ pending = local_softirq_pending();
+@@ -253,11 +253,12 @@ restart:
+ local_irq_disable();
+
+ pending = local_softirq_pending();
+- if (pending && --max_restart)
+- goto restart;
++ if (pending) {
++ if (time_before(jiffies, end) && !need_resched())
++ goto restart;
+
+- if (pending)
+ wakeup_softirqd();
++ }
+
+ lockdep_softirq_exit();
+
+diff --git a/kernel/sys.c b/kernel/sys.c
+index e9512b1..5a381e6 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -303,6 +303,7 @@ void kernel_restart_prepare(char *cmd)
+ void kernel_restart(char *cmd)
+ {
+ kernel_restart_prepare(cmd);
++ disable_nonboot_cpus();
+ if (!cmd)
+ printk(KERN_EMERG "Restarting system.\n");
+ else
+diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
+index 57b953f..67fe3d9 100644
+--- a/kernel/time/tick-broadcast.c
++++ b/kernel/time/tick-broadcast.c
+@@ -67,7 +67,8 @@ static void tick_broadcast_start_periodic(struct clock_event_device *bc)
+ */
+ int tick_check_broadcast_device(struct clock_event_device *dev)
+ {
+- if ((tick_broadcast_device.evtdev &&
++ if ((dev->features & CLOCK_EVT_FEAT_DUMMY) ||
++ (tick_broadcast_device.evtdev &&
+ tick_broadcast_device.evtdev->rating >= dev->rating) ||
+ (dev->features & CLOCK_EVT_FEAT_C3STOP))
+ return 0;
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index b63cfeb..9f0fd18 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -765,7 +765,7 @@ void tick_cancel_sched_timer(int cpu)
+ hrtimer_cancel(&ts->sched_timer);
+ # endif
+
+- ts->nohz_mode = NOHZ_MODE_INACTIVE;
++ memset(ts, 0, sizeof(*ts));
+ }
+ #endif
+
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 3d35af3..f65a0fb 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -809,7 +809,7 @@ void update_wall_time(void)
+ #endif
+ /* Check if there's really nothing to do */
+ if (offset < timekeeper.cycle_interval)
+- return;
++ goto out;
+
+ timekeeper.xtime_nsec = (s64)xtime.tv_nsec << timekeeper.shift;
+
+@@ -881,6 +881,7 @@ void update_wall_time(void)
+ timekeeper.ntp_error += timekeeper.xtime_nsec <<
+ timekeeper.ntp_error_shift;
+
++out:
+ nsecs = clocksource_cyc2ns(offset, timekeeper.mult, timekeeper.shift);
+ update_xtime_cache(nsecs);
+
+diff --git a/kernel/timer.c b/kernel/timer.c
+index cb3c1f1..8123679 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1553,12 +1553,12 @@ static int __cpuinit init_timers_cpu(int cpu)
+ boot_done = 1;
+ base = &boot_tvec_bases;
+ }
++ spin_lock_init(&base->lock);
+ tvec_base_done[cpu] = 1;
+ } else {
+ base = per_cpu(tvec_bases, cpu);
+ }
+
+- spin_lock_init(&base->lock);
+
+ for (j = 0; j < TVN_SIZE; j++) {
+ INIT_LIST_HEAD(base->tv5.vec + j);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 4872937..c5f8ab9 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -469,7 +469,6 @@ int ftrace_profile_pages_init(struct ftrace_profile_stat *stat)
+ free_page(tmp);
+ }
+
+- free_page((unsigned long)stat->pages);
+ stat->pages = NULL;
+ stat->start = NULL;
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index e749a05..6024960 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2876,6 +2876,8 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+ * Splice the empty reader page into the list around the head.
+ */
+ reader = rb_set_head_page(cpu_buffer);
++ if (!reader)
++ goto out;
+ cpu_buffer->reader_page->list.next = reader->list.next;
+ cpu_buffer->reader_page->list.prev = reader->list.prev;
+
+diff --git a/lib/genalloc.c b/lib/genalloc.c
+index eed2bdb..c1fb257 100644
+--- a/lib/genalloc.c
++++ b/lib/genalloc.c
+@@ -52,7 +52,7 @@ int gen_pool_add(struct gen_pool *pool, unsigned long addr, size_t size,
+ struct gen_pool_chunk *chunk;
+ int nbits = size >> pool->min_alloc_order;
+ int nbytes = sizeof(struct gen_pool_chunk) +
+- (nbits + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
++ BITS_TO_LONGS(nbits) * sizeof(long);
+
+ chunk = kmalloc_node(nbytes, GFP_KERNEL | __GFP_ZERO, nid);
+ if (unlikely(chunk == NULL))
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 20f9240..b435d1f 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1772,6 +1772,15 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
+ kref_get(&reservations->refs);
+ }
+
++static void resv_map_put(struct vm_area_struct *vma)
++{
++ struct resv_map *reservations = vma_resv_map(vma);
++
++ if (!reservations)
++ return;
++ kref_put(&reservations->refs, resv_map_release);
++}
++
+ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
+ {
+ struct hstate *h = hstate_vma(vma);
+@@ -1788,7 +1797,7 @@ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
+ reserve = (end - start) -
+ region_count(&reservations->regions, start, end);
+
+- kref_put(&reservations->refs, resv_map_release);
++ resv_map_put(vma);
+
+ if (reserve) {
+ hugetlb_acct_memory(h, -reserve);
+@@ -2472,12 +2481,16 @@ int hugetlb_reserve_pages(struct inode *inode,
+ set_vma_resv_flags(vma, HPAGE_RESV_OWNER);
+ }
+
+- if (chg < 0)
+- return chg;
++ if (chg < 0) {
++ ret = chg;
++ goto out_err;
++ }
+
+ /* There must be enough pages in the subpool for the mapping */
+- if (hugepage_subpool_get_pages(spool, chg))
+- return -ENOSPC;
++ if (hugepage_subpool_get_pages(spool, chg)) {
++ ret = -ENOSPC;
++ goto out_err;
++ }
+
+ /*
+ * Check enough hugepages are available for the reservation.
+@@ -2486,7 +2499,7 @@ int hugetlb_reserve_pages(struct inode *inode,
+ ret = hugetlb_acct_memory(h, chg);
+ if (ret < 0) {
+ hugepage_subpool_put_pages(spool, chg);
+- return ret;
++ goto out_err;
+ }
+
+ /*
+@@ -2503,6 +2516,10 @@ int hugetlb_reserve_pages(struct inode *inode,
+ if (!vma || vma->vm_flags & VM_MAYSHARE)
+ region_add(&inode->i_mapping->private_list, from, to);
+ return 0;
++out_err:
++ if (vma)
++ resv_map_put(vma);
++ return ret;
+ }
+
+ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed)
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index a6563fb..df6602f 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1759,7 +1759,7 @@ int __mpol_equal(struct mempolicy *a, struct mempolicy *b)
+ */
+
+ /* lookup first element intersecting start-end */
+-/* Caller holds sp->lock */
++/* Caller holds sp->mutex */
+ static struct sp_node *
+ sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
+ {
+@@ -1823,13 +1823,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
+
+ if (!sp->root.rb_node)
+ return NULL;
+- spin_lock(&sp->lock);
++ mutex_lock(&sp->mutex);
+ sn = sp_lookup(sp, idx, idx+1);
+ if (sn) {
+ mpol_get(sn->policy);
+ pol = sn->policy;
+ }
+- spin_unlock(&sp->lock);
++ mutex_unlock(&sp->mutex);
+ return pol;
+ }
+
+@@ -1860,10 +1860,10 @@ static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
+ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
+ unsigned long end, struct sp_node *new)
+ {
+- struct sp_node *n, *new2 = NULL;
++ struct sp_node *n;
++ int ret = 0;
+
+-restart:
+- spin_lock(&sp->lock);
++ mutex_lock(&sp->mutex);
+ n = sp_lookup(sp, start, end);
+ /* Take care of old policies in the same range. */
+ while (n && n->start < end) {
+@@ -1876,16 +1876,14 @@ restart:
+ } else {
+ /* Old policy spanning whole new range. */
+ if (n->end > end) {
++ struct sp_node *new2;
++ new2 = sp_alloc(end, n->end, n->policy);
+ if (!new2) {
+- spin_unlock(&sp->lock);
+- new2 = sp_alloc(end, n->end, n->policy);
+- if (!new2)
+- return -ENOMEM;
+- goto restart;
++ ret = -ENOMEM;
++ goto out;
+ }
+ n->end = start;
+ sp_insert(sp, new2);
+- new2 = NULL;
+ break;
+ } else
+ n->end = start;
+@@ -1896,12 +1894,9 @@ restart:
+ }
+ if (new)
+ sp_insert(sp, new);
+- spin_unlock(&sp->lock);
+- if (new2) {
+- mpol_put(new2->policy);
+- kmem_cache_free(sn_cache, new2);
+- }
+- return 0;
++out:
++ mutex_unlock(&sp->mutex);
++ return ret;
+ }
+
+ /**
+@@ -1919,7 +1914,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
+ int ret;
+
+ sp->root = RB_ROOT; /* empty tree == default mempolicy */
+- spin_lock_init(&sp->lock);
++ mutex_init(&sp->mutex);
+
+ if (mpol) {
+ struct vm_area_struct pvma;
+@@ -1987,7 +1982,7 @@ void mpol_free_shared_policy(struct shared_policy *p)
+
+ if (!p->root.rb_node)
+ return;
+- spin_lock(&p->lock);
++ mutex_lock(&p->mutex);
+ next = rb_first(&p->root);
+ while (next) {
+ n = rb_entry(next, struct sp_node, nd);
+@@ -1996,7 +1991,7 @@ void mpol_free_shared_policy(struct shared_policy *p)
+ mpol_put(n->policy);
+ kmem_cache_free(sn_cache, n);
+ }
+- spin_unlock(&p->lock);
++ mutex_unlock(&p->mutex);
+ }
+
+ /* assumes fs == KERNEL_DS */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 3e0005b..e6a0c72 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2242,6 +2242,7 @@ static int shmem_remount_fs(struct super_block *sb, int *flags, char *data)
+ unsigned long inodes;
+ int error = -EINVAL;
+
++ config.mpol = NULL;
+ if (shmem_parse_options(data, &config, true))
+ return error;
+
+@@ -2269,8 +2270,13 @@ static int shmem_remount_fs(struct super_block *sb, int *flags, char *data)
+ sbinfo->max_inodes = config.max_inodes;
+ sbinfo->free_inodes = config.max_inodes - inodes;
+
+- mpol_put(sbinfo->mpol);
+- sbinfo->mpol = config.mpol; /* transfers initial ref */
++ /*
++ * Preserve previous mempolicy unless mpol remount option was specified.
++ */
++ if (config.mpol) {
++ mpol_put(sbinfo->mpol);
++ sbinfo->mpol = config.mpol; /* transfers initial ref */
++ }
+ out:
+ spin_unlock(&sbinfo->stat_lock);
+ return error;
+diff --git a/mm/truncate.c b/mm/truncate.c
+index 258bda7..b41d26d 100644
+--- a/mm/truncate.c
++++ b/mm/truncate.c
+@@ -376,11 +376,12 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
+ if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
+ return 0;
+
++ clear_page_mlock(page);
++
+ spin_lock_irq(&mapping->tree_lock);
+ if (PageDirty(page))
+ goto failed;
+
+- clear_page_mlock(page);
+ BUG_ON(page_has_private(page));
+ __remove_from_page_cache(page);
+ spin_unlock_irq(&mapping->tree_lock);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 4649929..738db2b 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2241,6 +2241,8 @@ static int kswapd(void *p)
+ balance_pgdat(pgdat, order);
+ }
+ }
++
++ current->reclaim_state = NULL;
+ return 0;
+ }
+
+diff --git a/net/atm/common.c b/net/atm/common.c
+index 950bd16..65737b8 100644
+--- a/net/atm/common.c
++++ b/net/atm/common.c
+@@ -473,6 +473,8 @@ int vcc_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
+ struct sk_buff *skb;
+ int copied, error = -EINVAL;
+
++ msg->msg_namelen = 0;
++
+ if (sock->state != SS_CONNECTED)
+ return -ENOTCONN;
+ if (flags & ~MSG_DONTWAIT) /* only handle MSG_DONTWAIT */
+@@ -749,6 +751,7 @@ int vcc_getsockopt(struct socket *sock, int level, int optname,
+ if (!vcc->dev ||
+ !test_bit(ATM_VF_ADDR,&vcc->flags))
+ return -ENOTCONN;
++ memset(&pvc, 0, sizeof(pvc));
+ pvc.sap_family = AF_ATMPVC;
+ pvc.sap_addr.itf = vcc->dev->number;
+ pvc.sap_addr.vpi = vcc->vpi;
+diff --git a/net/atm/pvc.c b/net/atm/pvc.c
+index d4c0245..523c21a 100644
+--- a/net/atm/pvc.c
++++ b/net/atm/pvc.c
+@@ -93,6 +93,7 @@ static int pvc_getname(struct socket *sock,struct sockaddr *sockaddr,
+ if (!vcc->dev || !test_bit(ATM_VF_ADDR,&vcc->flags)) return -ENOTCONN;
+ *sockaddr_len = sizeof(struct sockaddr_atmpvc);
+ addr = (struct sockaddr_atmpvc *) sockaddr;
++ memset(addr, 0, sizeof(*addr));
+ addr->sap_family = AF_ATMPVC;
+ addr->sap_addr.itf = vcc->dev->number;
+ addr->sap_addr.vpi = vcc->vpi;
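Several hunks in this release (atm, ax25, bluetooth, dcbnl) apply one pattern: zero a kernel structure before partially filling it and copying it to user space, so padding bytes and unused fields cannot leak stack contents. A hedged userspace sketch with a hypothetical stand-in struct:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for struct sockaddr_atmpvc. */
struct pvc_addr {
    short sap_family;
    int   itf, vpi, vci;
    char  pad[8];        /* padding/unused space a leak would expose */
};

/* Fill an address destined for copy_to_user(): memset first so every
 * byte, including padding, has a defined value. */
static void fill_addr(struct pvc_addr *addr)
{
    memset(addr, 0, sizeof(*addr));
    addr->sap_family = 8;   /* AF_ATMPVC in the real code */
    addr->itf = 1;
    addr->vpi = 0;
    addr->vci = 5;
}
```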
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 1e9f3e42..8613bd1 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -1654,6 +1654,7 @@ static int ax25_recvmsg(struct kiocb *iocb, struct socket *sock,
+ ax25_address src;
+ const unsigned char *mac = skb_mac_header(skb);
+
++ memset(sax, 0, sizeof(struct full_sockaddr_ax25));
+ ax25_addr_parse(mac + 1, skb->data - mac - 1, &src, NULL,
+ &digi, NULL, NULL);
+ sax->sax25_family = AF_AX25;
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 8cfb5a8..d7239dd 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -240,14 +240,14 @@ int bt_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (flags & (MSG_OOB))
+ return -EOPNOTSUPP;
+
++ msg->msg_namelen = 0;
++
+ if (!(skb = skb_recv_datagram(sk, flags, noblock, &err))) {
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+ return 0;
+ return err;
+ }
+
+- msg->msg_namelen = 0;
+-
+ copied = skb->len;
+ if (len < copied) {
+ msg->msg_flags |= MSG_TRUNC;
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 75302a9..45caaaa 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -576,6 +576,7 @@ static int hci_sock_getsockopt(struct socket *sock, int level, int optname, char
+ {
+ struct hci_filter *f = &hci_pi(sk)->filter;
+
++ memset(&uf, 0, sizeof(uf));
+ uf.type_mask = f->type_mask;
+ uf.opcode = f->opcode;
+ uf.event_mask[0] = *((u32 *) f->event_mask + 0);
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 49d8495..0c2c59d 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -778,7 +778,7 @@ static int hidp_setup_hid(struct hidp_session *session,
+ hid->version = req->version;
+ hid->country = req->country;
+
+- strncpy(hid->name, req->name, 128);
++ strncpy(hid->name, req->name, sizeof(req->name) - 1);
+ strncpy(hid->phys, batostr(&src), 64);
+ strncpy(hid->uniq, batostr(&dst), 64);
+
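The hidp hunk replaces a hard-coded `128` bound with one derived from the buffer size. The general rule it illustrates: a `strncpy` bound must not exceed the destination, and since `strncpy` does not NUL-terminate on truncation, termination must be made explicit. A sketch with a hypothetical 16-byte buffer:

```c
#include <assert.h>
#include <string.h>

/* Bounded copy that always NUL-terminates, illustrating the rule
 * behind the hidp_setup_hid() fix (buffer size is illustrative). */
static void copy_name(char dst[static 16], const char *src)
{
    strncpy(dst, src, 16 - 1);  /* never write past dst */
    dst[16 - 1] = '\0';         /* strncpy may not terminate */
}
```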
+diff --git a/net/bluetooth/l2cap.c b/net/bluetooth/l2cap.c
+index 71120ee..1c20bd9 100644
+--- a/net/bluetooth/l2cap.c
++++ b/net/bluetooth/l2cap.c
+@@ -1184,6 +1184,7 @@ static int l2cap_sock_getname(struct socket *sock, struct sockaddr *addr, int *l
+
+ BT_DBG("sock %p, sk %p", sock, sk);
+
++ memset(la, 0, sizeof(struct sockaddr_l2));
+ addr->sa_family = AF_BLUETOOTH;
+ *len = sizeof(struct sockaddr_l2);
+
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 1ae3f80..1db0132 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -543,6 +543,7 @@ static int rfcomm_sock_getname(struct socket *sock, struct sockaddr *addr, int *
+
+ BT_DBG("sock %p, sk %p", sock, sk);
+
++ memset(sa, 0, sizeof(*sa));
+ sa->rc_family = AF_BLUETOOTH;
+ sa->rc_channel = rfcomm_pi(sk)->channel;
+ if (peer)
+@@ -651,6 +652,7 @@ static int rfcomm_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+
+ if (test_and_clear_bit(RFCOMM_DEFER_SETUP, &d->flags)) {
+ rfcomm_dlc_accept(d);
++ msg->msg_namelen = 0;
+ return 0;
+ }
+
+diff --git a/net/bridge/br_stp_bpdu.c b/net/bridge/br_stp_bpdu.c
+index 81ae40b..108215b 100644
+--- a/net/bridge/br_stp_bpdu.c
++++ b/net/bridge/br_stp_bpdu.c
+@@ -15,6 +15,7 @@
+ #include <linux/netfilter_bridge.h>
+ #include <linux/etherdevice.h>
+ #include <linux/llc.h>
++#include <linux/pkt_sched.h>
+ #include <net/net_namespace.h>
+ #include <net/llc.h>
+ #include <net/llc_pdu.h>
+@@ -39,6 +40,7 @@ static void br_send_bpdu(struct net_bridge_port *p,
+
+ skb->dev = p->dev;
+ skb->protocol = htons(ETH_P_802_2);
++ skb->priority = TC_PRIO_CONTROL;
+
+ skb_reserve(skb, LLC_RESERVE);
+ memcpy(__skb_put(skb, length), data, length);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 46e2a29..d775563 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -967,6 +967,8 @@ rollback:
+ */
+ int dev_set_alias(struct net_device *dev, const char *alias, size_t len)
+ {
++ char *new_ifalias;
++
+ ASSERT_RTNL();
+
+ if (len >= IFALIASZ)
+@@ -980,9 +982,10 @@ int dev_set_alias(struct net_device *dev, const char *alias, size_t len)
+ return 0;
+ }
+
+- dev->ifalias = krealloc(dev->ifalias, len + 1, GFP_KERNEL);
+- if (!dev->ifalias)
++ new_ifalias = krealloc(dev->ifalias, len + 1, GFP_KERNEL);
++ if (!new_ifalias)
+ return -ENOMEM;
++ dev->ifalias = new_ifalias;
+
+ strlcpy(dev->ifalias, alias, len+1);
+ return len;
+@@ -2845,7 +2848,7 @@ static void net_rx_action(struct softirq_action *h)
+ * Allow this to run for 2 jiffies since which will allow
+ * an average latency of 1.5/HZ.
+ */
+- if (unlikely(budget <= 0 || time_after(jiffies, time_limit)))
++ if (unlikely(budget <= 0 || time_after_eq(jiffies, time_limit)))
+ goto softnet_break;
+
+ local_irq_enable();
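The dev_set_alias() hunk above fixes a classic reallocation bug: assigning `krealloc()`'s result straight back to `dev->ifalias` loses the only reference to the old buffer when allocation fails. The fix stores the result in a temporary first. The same rule applies to userspace `realloc`; a minimal sketch:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Grow a buffer without leaking it on failure: never overwrite the
 * sole pointer with realloc()'s return value directly. */
static int set_alias(char **alias, const char *s)
{
    size_t len = strlen(s);
    char *tmp = realloc(*alias, len + 1);  /* may return NULL */

    if (!tmp)
        return -1;      /* *alias still valid, nothing leaked */
    memcpy(tmp, s, len + 1);
    *alias = tmp;
    return 0;
}
```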
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 4538a34..eafa660 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -562,7 +562,8 @@ set_rcvbuf:
+
+ case SO_KEEPALIVE:
+ #ifdef CONFIG_INET
+- if (sk->sk_protocol == IPPROTO_TCP)
++ if (sk->sk_protocol == IPPROTO_TCP &&
++ sk->sk_type == SOCK_STREAM)
+ tcp_set_keepalive(sk, valbool);
+ #endif
+ sock_valbool_flag(sk, SOCK_KEEPOPEN, valbool);
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index ac1205d..813fe4b 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -307,6 +307,7 @@ static int dcbnl_getperm_hwaddr(struct net_device *netdev, struct nlattr **tb,
+ dcb->dcb_family = AF_UNSPEC;
+ dcb->cmd = DCB_CMD_GPERM_HWADDR;
+
++ memset(perm_addr, 0, sizeof(perm_addr));
+ netdev->dcbnl_ops->getpermhwaddr(netdev, perm_addr);
+
+ ret = nla_put(dcbnl_skb, DCB_ATTR_PERM_HWADDR, sizeof(perm_addr),
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index d14c0a3..cef3656 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -47,6 +47,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ __be32 daddr, nexthop;
+ int tmp;
+ int err;
++ struct ip_options_rcu *inet_opt;
+
+ dp->dccps_role = DCCP_ROLE_CLIENT;
+
+@@ -57,10 +58,12 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ return -EAFNOSUPPORT;
+
+ nexthop = daddr = usin->sin_addr.s_addr;
+- if (inet->opt != NULL && inet->opt->srr) {
++
++ inet_opt = inet->inet_opt;
++ if (inet_opt != NULL && inet_opt->opt.srr) {
+ if (daddr == 0)
+ return -EINVAL;
+- nexthop = inet->opt->faddr;
++ nexthop = inet_opt->opt.faddr;
+ }
+
+ tmp = ip_route_connect(&rt, nexthop, inet->saddr,
+@@ -75,7 +78,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ return -ENETUNREACH;
+ }
+
+- if (inet->opt == NULL || !inet->opt->srr)
++ if (inet_opt == NULL || !inet_opt->opt.srr)
+ daddr = rt->rt_dst;
+
+ if (inet->saddr == 0)
+@@ -86,8 +89,8 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->daddr = daddr;
+
+ inet_csk(sk)->icsk_ext_hdr_len = 0;
+- if (inet->opt != NULL)
+- inet_csk(sk)->icsk_ext_hdr_len = inet->opt->optlen;
++ if (inet_opt)
++ inet_csk(sk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
+ /*
+ * Socket identity is still unknown (sport may be zero).
+ * However we set state to DCCP_REQUESTING and not releasing socket
+@@ -397,7 +400,7 @@ struct sock *dccp_v4_request_recv_sock(struct sock *sk, struct sk_buff *skb,
+ newinet->daddr = ireq->rmt_addr;
+ newinet->rcv_saddr = ireq->loc_addr;
+ newinet->saddr = ireq->loc_addr;
+- newinet->opt = ireq->opt;
++ newinet->inet_opt = ireq->opt;
+ ireq->opt = NULL;
+ newinet->mc_index = inet_iif(skb);
+ newinet->mc_ttl = ip_hdr(skb)->ttl;
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 9ed1962..2f11de7 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -600,7 +600,7 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk,
+
+ First: no IPv4 options.
+ */
+- newinet->opt = NULL;
++ newinet->inet_opt = NULL;
+
+ /* Clone RX bits */
+ newnp->rxopt.all = np->rxopt.all;
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index a289878..d1992a4 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -152,7 +152,7 @@ void inet_sock_destruct(struct sock *sk)
+ WARN_ON(sk->sk_wmem_queued);
+ WARN_ON(sk->sk_forward_alloc);
+
+- kfree(inet->opt);
++ kfree(inet->inet_opt);
+ dst_release(sk->sk_dst_cache);
+ sk_refcnt_debug_dec(sk);
+ }
+@@ -1065,9 +1065,11 @@ static int inet_sk_reselect_saddr(struct sock *sk)
+ __be32 old_saddr = inet->saddr;
+ __be32 new_saddr;
+ __be32 daddr = inet->daddr;
++ struct ip_options_rcu *inet_opt;
+
+- if (inet->opt && inet->opt->srr)
+- daddr = inet->opt->faddr;
++ inet_opt = inet->inet_opt;
++ if (inet_opt && inet_opt->opt.srr)
++ daddr = inet_opt->opt.faddr;
+
+ /* Query new route. */
+ err = ip_route_connect(&rt, daddr, 0,
+@@ -1109,6 +1111,7 @@ int inet_sk_rebuild_header(struct sock *sk)
+ struct inet_sock *inet = inet_sk(sk);
+ struct rtable *rt = (struct rtable *)__sk_dst_check(sk, 0);
+ __be32 daddr;
++ struct ip_options_rcu *inet_opt;
+ int err;
+
+ /* Route is OK, nothing to do. */
+@@ -1116,9 +1119,12 @@ int inet_sk_rebuild_header(struct sock *sk)
+ return 0;
+
+ /* Reroute. */
++ rcu_read_lock();
++ inet_opt = rcu_dereference(inet->inet_opt);
+ daddr = inet->daddr;
+- if (inet->opt && inet->opt->srr)
+- daddr = inet->opt->faddr;
++ if (inet_opt && inet_opt->opt.srr)
++ daddr = inet_opt->opt.faddr;
++ rcu_read_unlock();
+ {
+ struct flowi fl = {
+ .oif = sk->sk_bound_dev_if,
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 10f8f8d..b6d06d6 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1860,6 +1860,11 @@ static int cipso_v4_genopt(unsigned char *buf, u32 buf_len,
+ return CIPSO_V4_HDR_LEN + ret_val;
+ }
+
++static void opt_kfree_rcu(struct rcu_head *head)
++{
++ kfree(container_of(head, struct ip_options_rcu, rcu));
++}
++
+ /**
+ * cipso_v4_sock_setattr - Add a CIPSO option to a socket
+ * @sk: the socket
+@@ -1882,7 +1887,7 @@ int cipso_v4_sock_setattr(struct sock *sk,
+ unsigned char *buf = NULL;
+ u32 buf_len;
+ u32 opt_len;
+- struct ip_options *opt = NULL;
++ struct ip_options_rcu *old, *opt = NULL;
+ struct inet_sock *sk_inet;
+ struct inet_connection_sock *sk_conn;
+
+@@ -1918,22 +1923,25 @@ int cipso_v4_sock_setattr(struct sock *sk,
+ ret_val = -ENOMEM;
+ goto socket_setattr_failure;
+ }
+- memcpy(opt->__data, buf, buf_len);
+- opt->optlen = opt_len;
+- opt->cipso = sizeof(struct iphdr);
++ memcpy(opt->opt.__data, buf, buf_len);
++ opt->opt.optlen = opt_len;
++ opt->opt.cipso = sizeof(struct iphdr);
+ kfree(buf);
+ buf = NULL;
+
+ sk_inet = inet_sk(sk);
++
++ old = sk_inet->inet_opt;
+ if (sk_inet->is_icsk) {
+ sk_conn = inet_csk(sk);
+- if (sk_inet->opt)
+- sk_conn->icsk_ext_hdr_len -= sk_inet->opt->optlen;
+- sk_conn->icsk_ext_hdr_len += opt->optlen;
++ if (old)
++ sk_conn->icsk_ext_hdr_len -= old->opt.optlen;
++ sk_conn->icsk_ext_hdr_len += opt->opt.optlen;
+ sk_conn->icsk_sync_mss(sk, sk_conn->icsk_pmtu_cookie);
+ }
+- opt = xchg(&sk_inet->opt, opt);
+- kfree(opt);
++ rcu_assign_pointer(sk_inet->inet_opt, opt);
++ if (old)
++ call_rcu(&old->rcu, opt_kfree_rcu);
+
+ return 0;
+
+@@ -1963,7 +1971,7 @@ int cipso_v4_req_setattr(struct request_sock *req,
+ unsigned char *buf = NULL;
+ u32 buf_len;
+ u32 opt_len;
+- struct ip_options *opt = NULL;
++ struct ip_options_rcu *opt = NULL;
+ struct inet_request_sock *req_inet;
+
+ /* We allocate the maximum CIPSO option size here so we are probably
+@@ -1991,15 +1999,16 @@ int cipso_v4_req_setattr(struct request_sock *req,
+ ret_val = -ENOMEM;
+ goto req_setattr_failure;
+ }
+- memcpy(opt->__data, buf, buf_len);
+- opt->optlen = opt_len;
+- opt->cipso = sizeof(struct iphdr);
++ memcpy(opt->opt.__data, buf, buf_len);
++ opt->opt.optlen = opt_len;
++ opt->opt.cipso = sizeof(struct iphdr);
+ kfree(buf);
+ buf = NULL;
+
+ req_inet = inet_rsk(req);
+ opt = xchg(&req_inet->opt, opt);
+- kfree(opt);
++ if (opt)
++ call_rcu(&opt->rcu, opt_kfree_rcu);
+
+ return 0;
+
+@@ -2019,34 +2028,34 @@ req_setattr_failure:
+ * values on failure.
+ *
+ */
+-int cipso_v4_delopt(struct ip_options **opt_ptr)
++int cipso_v4_delopt(struct ip_options_rcu **opt_ptr)
+ {
+ int hdr_delta = 0;
+- struct ip_options *opt = *opt_ptr;
++ struct ip_options_rcu *opt = *opt_ptr;
+
+- if (opt->srr || opt->rr || opt->ts || opt->router_alert) {
++ if (opt->opt.srr || opt->opt.rr || opt->opt.ts || opt->opt.router_alert) {
+ u8 cipso_len;
+ u8 cipso_off;
+ unsigned char *cipso_ptr;
+ int iter;
+ int optlen_new;
+
+- cipso_off = opt->cipso - sizeof(struct iphdr);
+- cipso_ptr = &opt->__data[cipso_off];
++ cipso_off = opt->opt.cipso - sizeof(struct iphdr);
++ cipso_ptr = &opt->opt.__data[cipso_off];
+ cipso_len = cipso_ptr[1];
+
+- if (opt->srr > opt->cipso)
+- opt->srr -= cipso_len;
+- if (opt->rr > opt->cipso)
+- opt->rr -= cipso_len;
+- if (opt->ts > opt->cipso)
+- opt->ts -= cipso_len;
+- if (opt->router_alert > opt->cipso)
+- opt->router_alert -= cipso_len;
+- opt->cipso = 0;
++ if (opt->opt.srr > opt->opt.cipso)
++ opt->opt.srr -= cipso_len;
++ if (opt->opt.rr > opt->opt.cipso)
++ opt->opt.rr -= cipso_len;
++ if (opt->opt.ts > opt->opt.cipso)
++ opt->opt.ts -= cipso_len;
++ if (opt->opt.router_alert > opt->opt.cipso)
++ opt->opt.router_alert -= cipso_len;
++ opt->opt.cipso = 0;
+
+ memmove(cipso_ptr, cipso_ptr + cipso_len,
+- opt->optlen - cipso_off - cipso_len);
++ opt->opt.optlen - cipso_off - cipso_len);
+
+ /* determining the new total option length is tricky because of
+ * the padding necessary, the only thing i can think to do at
+@@ -2055,21 +2064,21 @@ int cipso_v4_delopt(struct ip_options **opt_ptr)
+ * from there we can determine the new total option length */
+ iter = 0;
+ optlen_new = 0;
+- while (iter < opt->optlen)
+- if (opt->__data[iter] != IPOPT_NOP) {
+- iter += opt->__data[iter + 1];
++ while (iter < opt->opt.optlen)
++ if (opt->opt.__data[iter] != IPOPT_NOP) {
++ iter += opt->opt.__data[iter + 1];
+ optlen_new = iter;
+ } else
+ iter++;
+- hdr_delta = opt->optlen;
+- opt->optlen = (optlen_new + 3) & ~3;
+- hdr_delta -= opt->optlen;
++ hdr_delta = opt->opt.optlen;
++ opt->opt.optlen = (optlen_new + 3) & ~3;
++ hdr_delta -= opt->opt.optlen;
+ } else {
+ /* only the cipso option was present on the socket so we can
+ * remove the entire option struct */
+ *opt_ptr = NULL;
+- hdr_delta = opt->optlen;
+- kfree(opt);
++ hdr_delta = opt->opt.optlen;
++ call_rcu(&opt->rcu, opt_kfree_rcu);
+ }
+
+ return hdr_delta;
+@@ -2086,15 +2095,15 @@ int cipso_v4_delopt(struct ip_options **opt_ptr)
+ void cipso_v4_sock_delattr(struct sock *sk)
+ {
+ int hdr_delta;
+- struct ip_options *opt;
++ struct ip_options_rcu *opt;
+ struct inet_sock *sk_inet;
+
+ sk_inet = inet_sk(sk);
+- opt = sk_inet->opt;
+- if (opt == NULL || opt->cipso == 0)
++ opt = sk_inet->inet_opt;
++ if (opt == NULL || opt->opt.cipso == 0)
+ return;
+
+- hdr_delta = cipso_v4_delopt(&sk_inet->opt);
++ hdr_delta = cipso_v4_delopt(&sk_inet->inet_opt);
+ if (sk_inet->is_icsk && hdr_delta > 0) {
+ struct inet_connection_sock *sk_conn = inet_csk(sk);
+ sk_conn->icsk_ext_hdr_len -= hdr_delta;
+@@ -2112,12 +2121,12 @@ void cipso_v4_sock_delattr(struct sock *sk)
+ */
+ void cipso_v4_req_delattr(struct request_sock *req)
+ {
+- struct ip_options *opt;
++ struct ip_options_rcu *opt;
+ struct inet_request_sock *req_inet;
+
+ req_inet = inet_rsk(req);
+ opt = req_inet->opt;
+- if (opt == NULL || opt->cipso == 0)
++ if (opt == NULL || opt->opt.cipso == 0)
+ return;
+
+ cipso_v4_delopt(&req_inet->opt);
+@@ -2187,14 +2196,18 @@ getattr_return:
+ */
+ int cipso_v4_sock_getattr(struct sock *sk, struct netlbl_lsm_secattr *secattr)
+ {
+- struct ip_options *opt;
++ struct ip_options_rcu *opt;
++ int res = -ENOMSG;
+
+- opt = inet_sk(sk)->opt;
+- if (opt == NULL || opt->cipso == 0)
+- return -ENOMSG;
+-
+- return cipso_v4_getattr(opt->__data + opt->cipso - sizeof(struct iphdr),
+- secattr);
++ rcu_read_lock();
++ opt = rcu_dereference(inet_sk(sk)->inet_opt);
++ if (opt && opt->opt.cipso)
++ res = cipso_v4_getattr(opt->opt.__data +
++ opt->opt.cipso -
++ sizeof(struct iphdr),
++ secattr);
++ rcu_read_unlock();
++ return res;
+ }
+
+ /**
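The CIPSO and ip_sockglue hunks convert `opt = xchg(...); kfree(opt);` into `rcu_assign_pointer(...); call_rcu(..., opt_kfree_rcu);` so that lockless readers inside `rcu_read_lock()` sections never see freed memory. A rough userspace approximation of the writer side using C11 atomics — note the grace period is only simulated, so this is a sketch of the pointer-publish shape, not of RCU itself:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

struct opts { int optlen; };

/* Pointer that readers dereference without a lock. */
static _Atomic(struct opts *) inet_opt;

/* Writer side of the pattern: atomically publish the new block, then
 * dispose of the old one only after readers can no longer hold a
 * reference.  The kernel defers that free with call_rcu(); freeing
 * immediately is safe here only because this sketch has no
 * concurrent reader. */
static void replace_opts(struct opts *newp)
{
    struct opts *old = atomic_exchange(&inet_opt, newp);

    free(old);  /* kernel: call_rcu(&old->rcu, opt_kfree_rcu) */
}
```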
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 5bc13fe..859d781 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -107,8 +107,7 @@ struct icmp_bxm {
+ __be32 times[3];
+ } data;
+ int head_len;
+- struct ip_options replyopts;
+- unsigned char optbuf[40];
++ struct ip_options_data replyopts;
+ };
+
+ /* An array of errno for error messages from dest unreach. */
+@@ -362,7 +361,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ struct inet_sock *inet;
+ __be32 daddr;
+
+- if (ip_options_echo(&icmp_param->replyopts, skb))
++ if (ip_options_echo(&icmp_param->replyopts.opt.opt, skb))
+ return;
+
+ sk = icmp_xmit_lock(net);
+@@ -376,10 +375,10 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ daddr = ipc.addr = rt->rt_src;
+ ipc.opt = NULL;
+ ipc.shtx.flags = 0;
+- if (icmp_param->replyopts.optlen) {
+- ipc.opt = &icmp_param->replyopts;
+- if (ipc.opt->srr)
+- daddr = icmp_param->replyopts.faddr;
++ if (icmp_param->replyopts.opt.opt.optlen) {
++ ipc.opt = &icmp_param->replyopts.opt;
++ if (ipc.opt->opt.srr)
++ daddr = icmp_param->replyopts.opt.opt.faddr;
+ }
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+@@ -516,7 +515,7 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+ IPTOS_PREC_INTERNETCONTROL) :
+ iph->tos;
+
+- if (ip_options_echo(&icmp_param.replyopts, skb_in))
++ if (ip_options_echo(&icmp_param.replyopts.opt.opt, skb_in))
+ goto out_unlock;
+
+
+@@ -532,15 +531,15 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+ icmp_param.offset = skb_network_offset(skb_in);
+ inet_sk(sk)->tos = tos;
+ ipc.addr = iph->saddr;
+- ipc.opt = &icmp_param.replyopts;
++ ipc.opt = &icmp_param.replyopts.opt;
+ ipc.shtx.flags = 0;
+
+ {
+ struct flowi fl = {
+ .nl_u = {
+ .ip4_u = {
+- .daddr = icmp_param.replyopts.srr ?
+- icmp_param.replyopts.faddr :
++ .daddr = icmp_param.replyopts.opt.opt.srr ?
++ icmp_param.replyopts.opt.opt.faddr :
+ iph->saddr,
+ .saddr = saddr,
+ .tos = RT_TOS(tos)
+@@ -629,7 +628,7 @@ route_done:
+ room = dst_mtu(&rt->u.dst);
+ if (room > 576)
+ room = 576;
+- room -= sizeof(struct iphdr) + icmp_param.replyopts.optlen;
++ room -= sizeof(struct iphdr) + icmp_param.replyopts.opt.opt.optlen;
+ room -= sizeof(struct icmphdr);
+
+ icmp_param.data_len = skb_in->len - icmp_param.offset;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 537731b..a3bf986 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -356,11 +356,11 @@ struct dst_entry *inet_csk_route_req(struct sock *sk,
+ {
+ struct rtable *rt;
+ const struct inet_request_sock *ireq = inet_rsk(req);
+- struct ip_options *opt = inet_rsk(req)->opt;
++ struct ip_options_rcu *opt = inet_rsk(req)->opt;
+ struct flowi fl = { .oif = sk->sk_bound_dev_if,
+ .nl_u = { .ip4_u =
+- { .daddr = ((opt && opt->srr) ?
+- opt->faddr :
++ { .daddr = ((opt && opt->opt.srr) ?
++ opt->opt.faddr :
+ ireq->rmt_addr),
+ .saddr = ireq->loc_addr,
+ .tos = RT_CONN_FLAGS(sk) } },
+@@ -374,7 +374,7 @@ struct dst_entry *inet_csk_route_req(struct sock *sk,
+ security_req_classify_flow(req, &fl);
+ if (ip_route_output_flow(net, &rt, &fl, sk, 0))
+ goto no_route;
+- if (opt && opt->is_strictroute && rt->rt_dst != rt->rt_gateway)
++ if (opt && opt->opt.is_strictroute && rt->rt_dst != rt->rt_gateway)
+ goto route_err;
+ return &rt->u.dst;
+
+diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
+index 94bf105..8a95972 100644
+--- a/net/ipv4/ip_options.c
++++ b/net/ipv4/ip_options.c
+@@ -35,7 +35,7 @@
+ * saddr is address of outgoing interface.
+ */
+
+-void ip_options_build(struct sk_buff * skb, struct ip_options * opt,
++void ip_options_build(struct sk_buff *skb, struct ip_options *opt,
+ __be32 daddr, struct rtable *rt, int is_frag)
+ {
+ unsigned char *iph = skb_network_header(skb);
+@@ -82,9 +82,9 @@ void ip_options_build(struct sk_buff * skb, struct ip_options * opt,
+ * NOTE: dopt cannot point to skb.
+ */
+
+-int ip_options_echo(struct ip_options * dopt, struct sk_buff * skb)
++int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb)
+ {
+- struct ip_options *sopt;
++ const struct ip_options *sopt;
+ unsigned char *sptr, *dptr;
+ int soffset, doffset;
+ int optlen;
+@@ -94,10 +94,8 @@ int ip_options_echo(struct ip_options * dopt, struct sk_buff * skb)
+
+ sopt = &(IPCB(skb)->opt);
+
+- if (sopt->optlen == 0) {
+- dopt->optlen = 0;
++ if (sopt->optlen == 0)
+ return 0;
+- }
+
+ sptr = skb_network_header(skb);
+ dptr = dopt->__data;
+@@ -156,7 +154,7 @@ int ip_options_echo(struct ip_options * dopt, struct sk_buff * skb)
+ dopt->optlen += optlen;
+ }
+ if (sopt->srr) {
+- unsigned char * start = sptr+sopt->srr;
++ unsigned char *start = sptr+sopt->srr;
+ __be32 faddr;
+
+ optlen = start[1];
+@@ -499,19 +497,19 @@ void ip_options_undo(struct ip_options * opt)
+ }
+ }
+
+-static struct ip_options *ip_options_get_alloc(const int optlen)
++static struct ip_options_rcu *ip_options_get_alloc(const int optlen)
+ {
+- return kzalloc(sizeof(struct ip_options) + ((optlen + 3) & ~3),
++ return kzalloc(sizeof(struct ip_options_rcu) + ((optlen + 3) & ~3),
+ GFP_KERNEL);
+ }
+
+-static int ip_options_get_finish(struct net *net, struct ip_options **optp,
+- struct ip_options *opt, int optlen)
++static int ip_options_get_finish(struct net *net, struct ip_options_rcu **optp,
++ struct ip_options_rcu *opt, int optlen)
+ {
+ while (optlen & 3)
+- opt->__data[optlen++] = IPOPT_END;
+- opt->optlen = optlen;
+- if (optlen && ip_options_compile(net, opt, NULL)) {
++ opt->opt.__data[optlen++] = IPOPT_END;
++ opt->opt.optlen = optlen;
++ if (optlen && ip_options_compile(net, &opt->opt, NULL)) {
+ kfree(opt);
+ return -EINVAL;
+ }
+@@ -520,29 +518,29 @@ static int ip_options_get_finish(struct net *net, struct ip_options **optp,
+ return 0;
+ }
+
+-int ip_options_get_from_user(struct net *net, struct ip_options **optp,
++int ip_options_get_from_user(struct net *net, struct ip_options_rcu **optp,
+ unsigned char __user *data, int optlen)
+ {
+- struct ip_options *opt = ip_options_get_alloc(optlen);
++ struct ip_options_rcu *opt = ip_options_get_alloc(optlen);
+
+ if (!opt)
+ return -ENOMEM;
+- if (optlen && copy_from_user(opt->__data, data, optlen)) {
++ if (optlen && copy_from_user(opt->opt.__data, data, optlen)) {
+ kfree(opt);
+ return -EFAULT;
+ }
+ return ip_options_get_finish(net, optp, opt, optlen);
+ }
+
+-int ip_options_get(struct net *net, struct ip_options **optp,
++int ip_options_get(struct net *net, struct ip_options_rcu **optp,
+ unsigned char *data, int optlen)
+ {
+- struct ip_options *opt = ip_options_get_alloc(optlen);
++ struct ip_options_rcu *opt = ip_options_get_alloc(optlen);
+
+ if (!opt)
+ return -ENOMEM;
+ if (optlen)
+- memcpy(opt->__data, data, optlen);
++ memcpy(opt->opt.__data, data, optlen);
+ return ip_options_get_finish(net, optp, opt, optlen);
+ }
+
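The ip_options.c changes above retype the allocation helpers around `struct ip_options_rcu`, which embeds an `rcu_head` plus the options struct and its trailing variable-length `__data`, all carved from one `kzalloc()` sized with the option length rounded up to 4 bytes. A simplified sketch of that single-block, flexible-array-member layout (hypothetical struct, standard `calloc` in place of `kzalloc`):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified ip_options_rcu: fixed header plus variable-length data,
 * allocated as one block like ip_options_get_alloc(). */
struct opts {
    int  optlen;
    unsigned char data[];   /* flexible array member */
};

static struct opts *opts_alloc(int optlen)
{
    /* Round the payload up to a 4-byte boundary, as the kernel code
     * does with ((optlen + 3) & ~3), and zero the whole block. */
    return calloc(1, sizeof(struct opts) + (size_t)((optlen + 3) & ~3));
}
```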
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 44b7910..7dde039 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -137,14 +137,14 @@ static inline int ip_select_ttl(struct inet_sock *inet, struct dst_entry *dst)
+ *
+ */
+ int ip_build_and_send_pkt(struct sk_buff *skb, struct sock *sk,
+- __be32 saddr, __be32 daddr, struct ip_options *opt)
++ __be32 saddr, __be32 daddr, struct ip_options_rcu *opt)
+ {
+ struct inet_sock *inet = inet_sk(sk);
+ struct rtable *rt = skb_rtable(skb);
+ struct iphdr *iph;
+
+ /* Build the IP header. */
+- skb_push(skb, sizeof(struct iphdr) + (opt ? opt->optlen : 0));
++ skb_push(skb, sizeof(struct iphdr) + (opt ? opt->opt.optlen : 0));
+ skb_reset_network_header(skb);
+ iph = ip_hdr(skb);
+ iph->version = 4;
+@@ -160,9 +160,9 @@ int ip_build_and_send_pkt(struct sk_buff *skb, struct sock *sk,
+ iph->protocol = sk->sk_protocol;
+ ip_select_ident(iph, &rt->u.dst, sk);
+
+- if (opt && opt->optlen) {
+- iph->ihl += opt->optlen>>2;
+- ip_options_build(skb, opt, daddr, rt, 0);
++ if (opt && opt->opt.optlen) {
++ iph->ihl += opt->opt.optlen>>2;
++ ip_options_build(skb, &opt->opt, daddr, rt, 0);
+ }
+
+ skb->priority = sk->sk_priority;
+@@ -312,9 +312,10 @@ int ip_queue_xmit(struct sk_buff *skb, int ipfragok)
+ {
+ struct sock *sk = skb->sk;
+ struct inet_sock *inet = inet_sk(sk);
+- struct ip_options *opt = inet->opt;
++ struct ip_options_rcu *inet_opt = NULL;
+ struct rtable *rt;
+ struct iphdr *iph;
++ int res;
+
+ /* Skip all of this if the packet is already routed,
+ * f.e. by something like SCTP.
+@@ -325,13 +326,15 @@ int ip_queue_xmit(struct sk_buff *skb, int ipfragok)
+
+ /* Make sure we can route this packet. */
+ rt = (struct rtable *)__sk_dst_check(sk, 0);
++ rcu_read_lock();
++ inet_opt = rcu_dereference(inet->inet_opt);
+ if (rt == NULL) {
+ __be32 daddr;
+
+ /* Use correct destination address if we have options. */
+ daddr = inet->daddr;
+- if(opt && opt->srr)
+- daddr = opt->faddr;
++ if (inet_opt && inet_opt->opt.srr)
++ daddr = inet_opt->opt.faddr;
+
+ {
+ struct flowi fl = { .oif = sk->sk_bound_dev_if,
+@@ -359,11 +362,11 @@ int ip_queue_xmit(struct sk_buff *skb, int ipfragok)
+ skb_dst_set(skb, dst_clone(&rt->u.dst));
+
+ packet_routed:
+- if (opt && opt->is_strictroute && rt->rt_dst != rt->rt_gateway)
++ if (inet_opt && inet_opt->opt.is_strictroute && rt->rt_dst != rt->rt_gateway)
+ goto no_route;
+
+ /* OK, we know where to send it, allocate and build IP header. */
+- skb_push(skb, sizeof(struct iphdr) + (opt ? opt->optlen : 0));
++ skb_push(skb, sizeof(struct iphdr) + (inet_opt ? inet_opt->opt.optlen : 0));
+ skb_reset_network_header(skb);
+ iph = ip_hdr(skb);
+ *((__be16 *)iph) = htons((4 << 12) | (5 << 8) | (inet->tos & 0xff));
+@@ -377,9 +380,9 @@ packet_routed:
+ iph->daddr = rt->rt_dst;
+ /* Transport layer set skb->h.foo itself. */
+
+- if (opt && opt->optlen) {
+- iph->ihl += opt->optlen >> 2;
+- ip_options_build(skb, opt, inet->daddr, rt, 0);
++ if (inet_opt && inet_opt->opt.optlen) {
++ iph->ihl += inet_opt->opt.optlen >> 2;
++ ip_options_build(skb, &inet_opt->opt, inet->daddr, rt, 0);
+ }
+
+ ip_select_ident_more(iph, &rt->u.dst, sk,
+@@ -387,10 +390,12 @@ packet_routed:
+
+ skb->priority = sk->sk_priority;
+ skb->mark = sk->sk_mark;
+-
+- return ip_local_out(skb);
++ res = ip_local_out(skb);
++ rcu_read_unlock();
++ return res;
+
+ no_route:
++ rcu_read_unlock();
+ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
+ kfree_skb(skb);
+ return -EHOSTUNREACH;
+@@ -809,7 +814,7 @@ int ip_append_data(struct sock *sk,
+ /*
+ * setup for corking.
+ */
+- opt = ipc->opt;
++ opt = ipc->opt ? &ipc->opt->opt : NULL;
+ if (opt) {
+ if (inet->cork.opt == NULL) {
+ inet->cork.opt = kmalloc(sizeof(struct ip_options) + 40, sk->sk_allocation);
+@@ -1367,26 +1372,23 @@ void ip_send_reply(struct sock *sk, struct sk_buff *skb, struct ip_reply_arg *ar
+ unsigned int len)
+ {
+ struct inet_sock *inet = inet_sk(sk);
+- struct {
+- struct ip_options opt;
+- char data[40];
+- } replyopts;
++ struct ip_options_data replyopts;
+ struct ipcm_cookie ipc;
+ __be32 daddr;
+ struct rtable *rt = skb_rtable(skb);
+
+- if (ip_options_echo(&replyopts.opt, skb))
++ if (ip_options_echo(&replyopts.opt.opt, skb))
+ return;
+
+ daddr = ipc.addr = rt->rt_src;
+ ipc.opt = NULL;
+ ipc.shtx.flags = 0;
+
+- if (replyopts.opt.optlen) {
++ if (replyopts.opt.opt.optlen) {
+ ipc.opt = &replyopts.opt;
+
+- if (ipc.opt->srr)
+- daddr = replyopts.opt.faddr;
++ if (replyopts.opt.opt.srr)
++ daddr = replyopts.opt.opt.faddr;
+ }
+
+ {
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index e982b5c..099e6c3 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -434,6 +434,11 @@ out:
+ }
+
+
++static void opt_kfree_rcu(struct rcu_head *head)
++{
++ kfree(container_of(head, struct ip_options_rcu, rcu));
++}
++
+ /*
+ * Socket option code for IP. This is the end of the line after any
+ * TCP,UDP etc options on an IP socket.
+@@ -479,13 +484,15 @@ static int do_ip_setsockopt(struct sock *sk, int level,
+ switch (optname) {
+ case IP_OPTIONS:
+ {
+- struct ip_options *opt = NULL;
++ struct ip_options_rcu *old, *opt = NULL;
++
+ if (optlen > 40 || optlen < 0)
+ goto e_inval;
+ err = ip_options_get_from_user(sock_net(sk), &opt,
+ optval, optlen);
+ if (err)
+ break;
++ old = inet->inet_opt;
+ if (inet->is_icsk) {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+@@ -494,17 +501,18 @@ static int do_ip_setsockopt(struct sock *sk, int level,
+ (TCPF_LISTEN | TCPF_CLOSE)) &&
+ inet->daddr != LOOPBACK4_IPV6)) {
+ #endif
+- if (inet->opt)
+- icsk->icsk_ext_hdr_len -= inet->opt->optlen;
++ if (old)
++ icsk->icsk_ext_hdr_len -= old->opt.optlen;
+ if (opt)
+- icsk->icsk_ext_hdr_len += opt->optlen;
++ icsk->icsk_ext_hdr_len += opt->opt.optlen;
+ icsk->icsk_sync_mss(sk, icsk->icsk_pmtu_cookie);
+ #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ }
+ #endif
+ }
+- opt = xchg(&inet->opt, opt);
+- kfree(opt);
++ rcu_assign_pointer(inet->inet_opt, opt);
++ if (old)
++ call_rcu(&old->rcu, opt_kfree_rcu);
+ break;
+ }
+ case IP_PKTINFO:
+@@ -563,7 +571,7 @@ static int do_ip_setsockopt(struct sock *sk, int level,
+ case IP_TTL:
+ if (optlen < 1)
+ goto e_inval;
+- if (val != -1 && (val < 0 || val > 255))
++ if (val != -1 && (val < 1 || val > 255))
+ goto e_inval;
+ inet->uc_ttl = val;
+ break;
+@@ -1032,12 +1040,15 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
+ case IP_OPTIONS:
+ {
+ unsigned char optbuf[sizeof(struct ip_options)+40];
+- struct ip_options * opt = (struct ip_options *)optbuf;
++ struct ip_options *opt = (struct ip_options *)optbuf;
++ struct ip_options_rcu *inet_opt;
++
++ inet_opt = inet->inet_opt;
+ opt->optlen = 0;
+- if (inet->opt)
+- memcpy(optbuf, inet->opt,
+- sizeof(struct ip_options)+
+- inet->opt->optlen);
++ if (inet_opt)
++ memcpy(optbuf, &inet_opt->opt,
++ sizeof(struct ip_options) +
++ inet_opt->opt.optlen);
+ release_sock(sk);
+
+ if (opt->optlen == 0)
+diff --git a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+index 1032a15..c6437d5 100644
+--- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
++++ b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+@@ -83,6 +83,14 @@ static int ipv4_get_l4proto(const struct sk_buff *skb, unsigned int nhoff,
+ *dataoff = nhoff + (iph->ihl << 2);
+ *protonum = iph->protocol;
+
++ /* Check bogus IP headers */
++ if (*dataoff > skb->len) {
++ pr_debug("nf_conntrack_ipv4: bogus IPv4 packet: "
++ "nhoff %u, ihl %u, skblen %u\n",
++ nhoff, iph->ihl << 2, skb->len);
++ return -NF_ACCEPT;
++ }
++
+ return NF_ACCEPT;
+ }
+
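The conntrack hunk rejects packets whose computed transport offset (`nhoff + ihl * 4`) lands beyond `skb->len`. Any parser that derives an offset from an attacker-controlled length field needs that check before using the offset. A self-contained sketch of the same validation (simplified: `ihl` is the low nibble of the first header byte):

```c
#include <assert.h>
#include <stddef.h>

/* Validate an IPv4-style header length against the buffer, as the
 * nf_conntrack ipv4_get_l4proto() fix does.  Returns the offset of
 * the L4 data, or -1 for a bogus header. */
static int l4_offset(const unsigned char *pkt, size_t len, size_t nhoff)
{
    size_t dataoff;

    if (nhoff >= len)
        return -1;
    dataoff = nhoff + (size_t)((pkt[nhoff] & 0x0f) << 2);
    if (dataoff > len)
        return -1;              /* bogus header: ihl points past end */
    return (int)dataoff;
}
```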
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index ab996f9..07ab583 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -459,6 +459,7 @@ static int raw_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ __be32 saddr;
+ u8 tos;
+ int err;
++ struct ip_options_data opt_copy;
+
+ err = -EMSGSIZE;
+ if (len > 0xFFFF)
+@@ -519,8 +520,18 @@ static int raw_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ saddr = ipc.addr;
+ ipc.addr = daddr;
+
+- if (!ipc.opt)
+- ipc.opt = inet->opt;
++ if (!ipc.opt) {
++ struct ip_options_rcu *inet_opt;
++
++ rcu_read_lock();
++ inet_opt = rcu_dereference(inet->inet_opt);
++ if (inet_opt) {
++ memcpy(&opt_copy, inet_opt,
++ sizeof(*inet_opt) + inet_opt->opt.optlen);
++ ipc.opt = &opt_copy.opt;
++ }
++ rcu_read_unlock();
++ }
+
+ if (ipc.opt) {
+ err = -EINVAL;
+@@ -529,10 +540,10 @@ static int raw_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ */
+ if (inet->hdrincl)
+ goto done;
+- if (ipc.opt->srr) {
++ if (ipc.opt->opt.srr) {
+ if (!daddr)
+ goto done;
+- daddr = ipc.opt->faddr;
++ daddr = ipc.opt->opt.faddr;
+ }
+ }
+ tos = RT_CONN_FLAGS(sk);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 58f141b..f16d19b 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1412,7 +1412,7 @@ void ip_rt_redirect(__be32 old_gw, __be32 daddr, __be32 new_gw,
+ dev_hold(rt->u.dst.dev);
+ if (rt->idev)
+ in_dev_hold(rt->idev);
+- rt->u.dst.obsolete = 0;
++ rt->u.dst.obsolete = -1;
+ rt->u.dst.lastuse = jiffies;
+ rt->u.dst.path = &rt->u.dst;
+ rt->u.dst.neighbour = NULL;
+@@ -1477,7 +1477,7 @@ static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst)
+ struct dst_entry *ret = dst;
+
+ if (rt) {
+- if (dst->obsolete) {
++ if (dst->obsolete > 0) {
+ ip_rt_put(rt);
+ ret = NULL;
+ } else if ((rt->rt_flags & RTCF_REDIRECTED) ||
+@@ -1700,7 +1700,9 @@ static void ip_rt_update_pmtu(struct dst_entry *dst, u32 mtu)
+
+ static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie)
+ {
+- return NULL;
++ if (rt_is_expired((struct rtable *)dst))
++ return NULL;
++ return dst;
+ }
+
+ static void ipv4_dst_destroy(struct dst_entry *dst)
+@@ -1862,7 +1864,8 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ if (!rth)
+ goto e_nobufs;
+
+- rth->u.dst.output= ip_rt_bug;
++ rth->u.dst.output = ip_rt_bug;
++ rth->u.dst.obsolete = -1;
+
+ atomic_set(&rth->u.dst.__refcnt, 1);
+ rth->u.dst.flags= DST_HOST;
+@@ -2023,6 +2026,7 @@ static int __mkroute_input(struct sk_buff *skb,
+ rth->fl.oif = 0;
+ rth->rt_spec_dst= spec_dst;
+
++ rth->u.dst.obsolete = -1;
+ rth->u.dst.input = ip_forward;
+ rth->u.dst.output = ip_output;
+ rth->rt_genid = rt_genid(dev_net(rth->u.dst.dev));
+@@ -2187,6 +2191,7 @@ local_input:
+ goto e_nobufs;
+
+ rth->u.dst.output= ip_rt_bug;
++ rth->u.dst.obsolete = -1;
+ rth->rt_genid = rt_genid(net);
+
+ atomic_set(&rth->u.dst.__refcnt, 1);
+@@ -2411,7 +2416,8 @@ static int __mkroute_output(struct rtable **result,
+ rth->rt_gateway = fl->fl4_dst;
+ rth->rt_spec_dst= fl->fl4_src;
+
+- rth->u.dst.output=ip_output;
++ rth->u.dst.output = ip_output;
++ rth->u.dst.obsolete = -1;
+ rth->rt_genid = rt_genid(dev_net(dev_out));
+
+ RT_CACHE_STAT_INC(out_slow_tot);
+@@ -2741,6 +2747,7 @@ static int ipv4_dst_blackhole(struct net *net, struct rtable **rp, struct flowi
+ if (rt) {
+ struct dst_entry *new = &rt->u.dst;
+
++ new->obsolete = -1;
+ atomic_set(&new->__refcnt, 1);
+ new->__use = 1;
+ new->input = dst_discard;
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index a6e0e07..0a94b64 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -309,10 +309,10 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb,
+ * the ACK carries the same options again (see RFC1122 4.2.3.8)
+ */
+ if (opt && opt->optlen) {
+- int opt_size = sizeof(struct ip_options) + opt->optlen;
++ int opt_size = sizeof(struct ip_options_rcu) + opt->optlen;
+
+ ireq->opt = kmalloc(opt_size, GFP_ATOMIC);
+- if (ireq->opt != NULL && ip_options_echo(ireq->opt, skb)) {
++ if (ireq->opt != NULL && ip_options_echo(&ireq->opt->opt, skb)) {
+ kfree(ireq->opt);
+ ireq->opt = NULL;
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index b9644d8..6232462 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -847,7 +847,7 @@ wait_for_memory:
+ }
+
+ out:
+- if (copied)
++ if (copied && !(flags & MSG_SENDPAGE_NOTLAST))
+ tcp_push(sk, flags, mss_now, tp->nonagle);
+ return copied;
+
+diff --git a/net/ipv4/tcp_illinois.c b/net/ipv4/tcp_illinois.c
+index 1eba160..c35d91f 100644
+--- a/net/ipv4/tcp_illinois.c
++++ b/net/ipv4/tcp_illinois.c
+@@ -313,11 +313,13 @@ static void tcp_illinois_info(struct sock *sk, u32 ext,
+ .tcpv_rttcnt = ca->cnt_rtt,
+ .tcpv_minrtt = ca->base_rtt,
+ };
+- u64 t = ca->sum_rtt;
+
+- do_div(t, ca->cnt_rtt);
+- info.tcpv_rtt = t;
++ if (info.tcpv_rttcnt > 0) {
++ u64 t = ca->sum_rtt;
+
++ do_div(t, info.tcpv_rttcnt);
++ info.tcpv_rtt = t;
++ }
+ nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info);
+ }
+ }
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 6a4e832..d746d3b3 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -152,6 +152,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ __be32 daddr, nexthop;
+ int tmp;
+ int err;
++ struct ip_options_rcu *inet_opt;
+
+ if (addr_len < sizeof(struct sockaddr_in))
+ return -EINVAL;
+@@ -160,10 +161,11 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ return -EAFNOSUPPORT;
+
+ nexthop = daddr = usin->sin_addr.s_addr;
+- if (inet->opt && inet->opt->srr) {
++ inet_opt = inet->inet_opt;
++ if (inet_opt && inet_opt->opt.srr) {
+ if (!daddr)
+ return -EINVAL;
+- nexthop = inet->opt->faddr;
++ nexthop = inet_opt->opt.faddr;
+ }
+
+ tmp = ip_route_connect(&rt, nexthop, inet->saddr,
+@@ -181,7 +183,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ return -ENETUNREACH;
+ }
+
+- if (!inet->opt || !inet->opt->srr)
++ if (!inet_opt || !inet_opt->opt.srr)
+ daddr = rt->rt_dst;
+
+ if (!inet->saddr)
+@@ -215,8 +217,8 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->daddr = daddr;
+
+ inet_csk(sk)->icsk_ext_hdr_len = 0;
+- if (inet->opt)
+- inet_csk(sk)->icsk_ext_hdr_len = inet->opt->optlen;
++ if (inet_opt)
++ inet_csk(sk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
+
+ tp->rx_opt.mss_clamp = 536;
+
+@@ -802,17 +804,18 @@ static void syn_flood_warning(struct sk_buff *skb)
+ /*
+ * Save and compile IPv4 options into the request_sock if needed.
+ */
+-static struct ip_options *tcp_v4_save_options(struct sock *sk,
+- struct sk_buff *skb)
++static struct ip_options_rcu *tcp_v4_save_options(struct sock *sk,
++ struct sk_buff *skb)
+ {
+- struct ip_options *opt = &(IPCB(skb)->opt);
+- struct ip_options *dopt = NULL;
++ const struct ip_options *opt = &(IPCB(skb)->opt);
++ struct ip_options_rcu *dopt = NULL;
+
+ if (opt && opt->optlen) {
+- int opt_size = optlength(opt);
++ int opt_size = sizeof(*dopt) + opt->optlen;
++
+ dopt = kmalloc(opt_size, GFP_ATOMIC);
+ if (dopt) {
+- if (ip_options_echo(dopt, skb)) {
++ if (ip_options_echo(&dopt->opt, skb)) {
+ kfree(dopt);
+ dopt = NULL;
+ }
+@@ -1362,6 +1365,7 @@ struct sock *tcp_v4_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
+ #ifdef CONFIG_TCP_MD5SIG
+ struct tcp_md5sig_key *key;
+ #endif
++ struct ip_options_rcu *inet_opt;
+
+ if (sk_acceptq_is_full(sk))
+ goto exit_overflow;
+@@ -1382,13 +1386,14 @@ struct sock *tcp_v4_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
+ newinet->daddr = ireq->rmt_addr;
+ newinet->rcv_saddr = ireq->loc_addr;
+ newinet->saddr = ireq->loc_addr;
+- newinet->opt = ireq->opt;
++ inet_opt = ireq->opt;
++ rcu_assign_pointer(newinet->inet_opt, inet_opt);
+ ireq->opt = NULL;
+ newinet->mc_index = inet_iif(skb);
+ newinet->mc_ttl = ip_hdr(skb)->ttl;
+ inet_csk(newsk)->icsk_ext_hdr_len = 0;
+- if (newinet->opt)
+- inet_csk(newsk)->icsk_ext_hdr_len = newinet->opt->optlen;
++ if (inet_opt)
++ inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
+ newinet->id = newtp->write_seq ^ jiffies;
+
+ tcp_mtup_init(newsk);
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index af83bdf..38a23e4 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1391,8 +1391,11 @@ static int tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb)
+ goto send_now;
+ }
+
+- /* Ok, it looks like it is advisable to defer. */
+- tp->tso_deferred = 1 | (jiffies << 1);
++ /* Ok, it looks like it is advisable to defer.
++ * Do not rearm the timer if already set to not break TCP ACK clocking.
++ */
++ if (!tp->tso_deferred)
++ tp->tso_deferred = 1 | (jiffies << 1);
+
+ return 1;
+
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 8e28770..af559e0 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -592,6 +592,7 @@ int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ int err, is_udplite = IS_UDPLITE(sk);
+ int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
+ int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
++ struct ip_options_data opt_copy;
+
+ if (len > 0xFFFF)
+ return -EMSGSIZE;
+@@ -663,22 +664,32 @@ int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ free = 1;
+ connected = 0;
+ }
+- if (!ipc.opt)
+- ipc.opt = inet->opt;
++ if (!ipc.opt) {
++ struct ip_options_rcu *inet_opt;
++
++ rcu_read_lock();
++ inet_opt = rcu_dereference(inet->inet_opt);
++ if (inet_opt) {
++ memcpy(&opt_copy, inet_opt,
++ sizeof(*inet_opt) + inet_opt->opt.optlen);
++ ipc.opt = &opt_copy.opt;
++ }
++ rcu_read_unlock();
++ }
+
+ saddr = ipc.addr;
+ ipc.addr = faddr = daddr;
+
+- if (ipc.opt && ipc.opt->srr) {
++ if (ipc.opt && ipc.opt->opt.srr) {
+ if (!daddr)
+ return -EINVAL;
+- faddr = ipc.opt->faddr;
++ faddr = ipc.opt->opt.faddr;
+ connected = 0;
+ }
+ tos = RT_TOS(inet->tos);
+ if (sock_flag(sk, SOCK_LOCALROUTE) ||
+ (msg->msg_flags & MSG_DONTROUTE) ||
+- (ipc.opt && ipc.opt->is_strictroute)) {
++ (ipc.opt && ipc.opt->opt.is_strictroute)) {
+ tos |= RTO_ONLINK;
+ connected = 0;
+ }
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index e127a32..835590d 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -1073,6 +1073,8 @@ static int __init inet6_init(void)
+ goto out;
+ }
+
++ initialize_hashidentrnd();
++
+ err = proto_register(&tcpv6_prot, 1);
+ if (err)
+ goto out;
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 9ad5792..6ba0fe2 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -604,6 +604,35 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr)
+ return offset;
+ }
+
++static u32 hashidentrnd __read_mostly;
++#define FID_HASH_SZ 16
++static u32 ipv6_fragmentation_id[FID_HASH_SZ];
++
++void __init initialize_hashidentrnd(void)
++{
++ get_random_bytes(&hashidentrnd, sizeof(hashidentrnd));
++}
++
++static u32 __ipv6_select_ident(const struct in6_addr *addr)
++{
++ u32 newid, oldid, hash = jhash2((u32 *)addr, 4, hashidentrnd);
++ u32 *pid = &ipv6_fragmentation_id[hash % FID_HASH_SZ];
++
++ do {
++ oldid = *pid;
++ newid = oldid + 1;
++ if (!(hash + newid))
++ newid++;
++ } while (cmpxchg(pid, oldid, newid) != oldid);
++
++ return hash + newid;
++}
++
++void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt)
++{
++ fhdr->identification = htonl(__ipv6_select_ident(&rt->rt6i_dst.addr));
++}
++
+ static int ip6_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))
+ {
+ struct sk_buff *frag;
+@@ -689,7 +718,7 @@ static int ip6_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))
+ skb_reset_network_header(skb);
+ memcpy(skb_network_header(skb), tmp_hdr, hlen);
+
+- ipv6_select_ident(fh);
++ ipv6_select_ident(fh, rt);
+ fh->nexthdr = nexthdr;
+ fh->reserved = 0;
+ fh->frag_off = htons(IP6_MF);
+@@ -835,7 +864,7 @@ slow_path:
+ fh->nexthdr = nexthdr;
+ fh->reserved = 0;
+ if (!frag_id) {
+- ipv6_select_ident(fh);
++ ipv6_select_ident(fh, rt);
+ frag_id = fh->identification;
+ } else
+ fh->identification = frag_id;
+@@ -1039,7 +1068,8 @@ static inline int ip6_ufo_append_data(struct sock *sk,
+ int getfrag(void *from, char *to, int offset, int len,
+ int odd, struct sk_buff *skb),
+ void *from, int length, int hh_len, int fragheaderlen,
+- int transhdrlen, int mtu,unsigned int flags)
++ int transhdrlen, int mtu,unsigned int flags,
++ struct rt6_info *rt)
+
+ {
+ struct sk_buff *skb;
+@@ -1084,7 +1114,7 @@ static inline int ip6_ufo_append_data(struct sock *sk,
+ skb_shinfo(skb)->gso_size = (mtu - fragheaderlen -
+ sizeof(struct frag_hdr)) & ~7;
+ skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
+- ipv6_select_ident(&fhdr);
++ ipv6_select_ident(&fhdr, rt);
+ skb_shinfo(skb)->ip6_frag_id = fhdr.identification;
+ __skb_queue_tail(&sk->sk_write_queue, skb);
+
+@@ -1233,7 +1263,7 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to,
+
+ err = ip6_ufo_append_data(sk, getfrag, from, length, hh_len,
+ fragheaderlen, transhdrlen, mtu,
+- flags);
++ flags, rt);
+ if (err)
+ goto error;
+ return 0;
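The `__ipv6_select_ident()` helper introduced above replaces a single global fragment-ID counter with sixteen counters selected by hashing the destination address with a boot-time random key, so IDs are no longer predictable across destinations. A single-threaded user-space sketch of the same scheme (assumptions: `mix()` stands in for `jhash2()`, and the plain increment replaces the lock-free `cmpxchg()` loop the kernel needs under concurrency):

```c
#include <assert.h>
#include <stdint.h>

#define FID_HASH_SZ 16

static uint32_t hashidentrnd;              /* randomized once at init */
static uint32_t frag_id[FID_HASH_SZ];      /* per-bucket counters */

/* Stand-in for jhash2(): any reasonable 32-bit mix works for the sketch. */
static uint32_t mix(const uint32_t addr[4], uint32_t seed)
{
    uint32_t h = seed;
    for (int i = 0; i < 4; i++) {
        h ^= addr[i];
        h *= 0x9e3779b1u;
    }
    return h;
}

static uint32_t select_ident(const uint32_t addr[4])
{
    uint32_t hash = mix(addr, hashidentrnd);
    uint32_t *pid = &frag_id[hash % FID_HASH_SZ];
    uint32_t newid = *pid + 1;   /* kernel: cmpxchg() retry loop here */
    if (hash + newid == 0)       /* never hand out identification 0 */
        newid++;
    *pid = newid;
    return hash + newid;
}
```

Two fragments to the same destination still get consecutive IDs (preserving reassembly behaviour), but an off-path attacker observing IDs for one destination learns nothing about another's sequence.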
+diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
+index 4d18699..105de22 100644
+--- a/net/ipv6/reassembly.c
++++ b/net/ipv6/reassembly.c
+@@ -148,16 +148,6 @@ int ip6_frag_match(struct inet_frag_queue *q, void *a)
+ }
+ EXPORT_SYMBOL(ip6_frag_match);
+
+-/* Memory Tracking Functions. */
+-static inline void frag_kfree_skb(struct netns_frags *nf,
+- struct sk_buff *skb, int *work)
+-{
+- if (work)
+- *work -= skb->truesize;
+- atomic_sub(skb->truesize, &nf->mem);
+- kfree_skb(skb);
+-}
+-
+ void ip6_frag_init(struct inet_frag_queue *q, void *a)
+ {
+ struct frag_queue *fq = container_of(q, struct frag_queue, q);
+@@ -348,58 +338,22 @@ static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,
+ prev = next;
+ }
+
+- /* We found where to put this one. Check for overlap with
+- * preceding fragment, and, if needed, align things so that
+- * any overlaps are eliminated.
++ /* RFC5722, Section 4:
++ * When reassembling an IPv6 datagram, if
++ * one or more its constituent fragments is determined to be an
++ * overlapping fragment, the entire datagram (and any constituent
++ * fragments, including those not yet received) MUST be silently
++ * discarded.
+ */
+- if (prev) {
+- int i = (FRAG6_CB(prev)->offset + prev->len) - offset;
+
+- if (i > 0) {
+- offset += i;
+- if (end <= offset)
+- goto err;
+- if (!pskb_pull(skb, i))
+- goto err;
+- if (skb->ip_summed != CHECKSUM_UNNECESSARY)
+- skb->ip_summed = CHECKSUM_NONE;
+- }
+- }
++ /* Check for overlap with preceding fragment. */
++ if (prev &&
++ (FRAG6_CB(prev)->offset + prev->len) - offset > 0)
++ goto discard_fq;
+
+- /* Look for overlap with succeeding segments.
+- * If we can merge fragments, do it.
+- */
+- while (next && FRAG6_CB(next)->offset < end) {
+- int i = end - FRAG6_CB(next)->offset; /* overlap is 'i' bytes */
+-
+- if (i < next->len) {
+- /* Eat head of the next overlapped fragment
+- * and leave the loop. The next ones cannot overlap.
+- */
+- if (!pskb_pull(next, i))
+- goto err;
+- FRAG6_CB(next)->offset += i; /* next fragment */
+- fq->q.meat -= i;
+- if (next->ip_summed != CHECKSUM_UNNECESSARY)
+- next->ip_summed = CHECKSUM_NONE;
+- break;
+- } else {
+- struct sk_buff *free_it = next;
+-
+- /* Old fragment is completely overridden with
+- * new one drop it.
+- */
+- next = next->next;
+-
+- if (prev)
+- prev->next = next;
+- else
+- fq->q.fragments = next;
+-
+- fq->q.meat -= free_it->len;
+- frag_kfree_skb(fq->q.net, free_it, NULL);
+- }
+- }
++ /* Look for overlap with succeeding segment. */
++ if (next && FRAG6_CB(next)->offset < end)
++ goto discard_fq;
+
+ FRAG6_CB(skb)->offset = offset;
+
+@@ -436,6 +390,8 @@ static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,
+ write_unlock(&ip6_frags.lock);
+ return -1;
+
++discard_fq:
++ fq_kill(fq);
+ err:
+ IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+ IPSTATS_MIB_REASMFAILS);
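The reassembly hunk above removes the old fragment-trimming logic entirely: per RFC 5722, any overlap now kills the whole queue via `discard_fq`. A hedged sketch of just the overlap predicate, with fragments modelled as (offset, len) pairs in queue order (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

struct frag { unsigned offset, len; };

/* True if a new fragment spanning [offset, end) would overlap its
 * preceding or succeeding neighbour — the two conditions that now
 * branch to discard_fq in ip6_frag_queue() instead of trimming. */
static bool frag_overlaps(const struct frag *prev, const struct frag *next,
                          unsigned offset, unsigned end)
{
    if (prev && prev->offset + prev->len > offset)
        return true;   /* tail of the preceding fragment covers our head */
    if (next && next->offset < end)
        return true;   /* our tail covers the succeeding fragment's head */
    return false;
}
```

Dropping the datagram outright closes the evasion technique where overlapping fragments carry different payloads to the IDS and to the end host.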
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index faae6df..1b25191 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1391,7 +1391,7 @@ static struct sock * tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
+
+ First: no IPv4 options.
+ */
+- newinet->opt = NULL;
++ newinet->inet_opt = NULL;
+ newnp->ipv6_fl_list = NULL;
+
+ /* Clone RX bits */
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 9cc6289..d8c0374 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1162,7 +1162,7 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb, int features)
+ fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen);
+ fptr->nexthdr = nexthdr;
+ fptr->reserved = 0;
+- ipv6_select_ident(fptr);
++ ipv6_select_ident(fptr, (struct rt6_info *)skb_dst(skb));
+
+ /* Fragment the skb. ipv6 header and the remaining fields of the
+ * fragment header are updated in ipv6_gso_segment()
+diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c
+index 476b24e..bfb325d 100644
+--- a/net/irda/af_irda.c
++++ b/net/irda/af_irda.c
+@@ -1338,6 +1338,8 @@ static int irda_recvmsg_dgram(struct kiocb *iocb, struct socket *sock,
+ if ((err = sock_error(sk)) < 0)
+ return err;
+
++ msg->msg_namelen = 0;
++
+ skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
+ flags & MSG_DONTWAIT, &err);
+ if (!skb)
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index bada1b9..f605b23 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -1160,6 +1160,8 @@ static int iucv_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ struct sk_buff *skb, *rskb, *cskb;
+ int err = 0;
+
++ msg->msg_namelen = 0;
++
+ if ((sk->sk_state == IUCV_DISCONN || sk->sk_state == IUCV_SEVERED) &&
+ skb_queue_empty(&iucv->backlog_skb_q) &&
+ skb_queue_empty(&sk->sk_receive_queue) &&
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 2da8d14..8a814a5 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -674,6 +674,8 @@ static int llc_ui_recvmsg(struct kiocb *iocb, struct socket *sock,
+ int target; /* Read at least this many bytes */
+ long timeo;
+
++ msg->msg_namelen = 0;
++
+ lock_sock(sk);
+ copied = -ENOTCONN;
+ if (unlikely(sk->sk_type == SOCK_STREAM && sk->sk_state == TCP_LISTEN))
+@@ -912,14 +914,13 @@ static int llc_ui_getname(struct socket *sock, struct sockaddr *uaddr,
+ struct sockaddr_llc sllc;
+ struct sock *sk = sock->sk;
+ struct llc_sock *llc = llc_sk(sk);
+- int rc = 0;
++ int rc = -EBADF;
+
+ memset(&sllc, 0, sizeof(sllc));
+ lock_sock(sk);
+ if (sock_flag(sk, SOCK_ZAPPED))
+ goto out;
+ *uaddrlen = sizeof(sllc);
+- memset(uaddr, 0, *uaddrlen);
+ if (peer) {
+ rc = -ENOTCONN;
+ if (sk->sk_state != TCP_ESTABLISHED)
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index 02b2610..9bcd972 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -2455,6 +2455,7 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
+ {
+ struct ip_vs_timeout_user t;
+
++ memset(&t, 0, sizeof(t));
+ __ip_vs_get_timeouts(&t);
+ if (copy_to_user(user, &t, sizeof(t)) != 0)
+ ret = -EFAULT;
+diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
+index 30b3189..5be9140 100644
+--- a/net/netfilter/ipvs/ip_vs_xmit.c
++++ b/net/netfilter/ipvs/ip_vs_xmit.c
+@@ -64,6 +64,15 @@ __ip_vs_dst_check(struct ip_vs_dest *dest, u32 rtos, u32 cookie)
+ return dst;
+ }
+
++static inline bool
++__mtu_check_toobig_v6(const struct sk_buff *skb, u32 mtu)
++{
++ if (skb->len > mtu && !skb_is_gso(skb)) {
++ return true; /* Packet size violate MTU size */
++ }
++ return false;
++}
++
+ static struct rtable *
+ __ip_vs_get_out_rt(struct ip_vs_conn *cp, u32 rtos)
+ {
+@@ -245,7 +254,8 @@ ip_vs_bypass_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if ((skb->len > mtu) && (iph->frag_off & htons(IP_DF))) {
++ if ((skb->len > mtu) && (iph->frag_off & htons(IP_DF)) &&
++ !skb_is_gso(skb)) {
+ ip_rt_put(rt);
+ icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu));
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+@@ -309,7 +319,7 @@ ip_vs_bypass_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if (skb->len > mtu) {
++ if (__mtu_check_toobig_v6(skb, mtu)) {
+ dst_release(&rt->u.dst);
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+@@ -376,7 +386,7 @@ ip_vs_nat_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if ((skb->len > mtu) && (iph->frag_off & htons(IP_DF))) {
++ if ((skb->len > mtu) && (iph->frag_off & htons(IP_DF)) && !skb_is_gso(skb)) {
+ ip_rt_put(rt);
+ icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu));
+ IP_VS_DBG_RL_PKT(0, pp, skb, 0, "ip_vs_nat_xmit(): frag needed for");
+@@ -452,7 +462,7 @@ ip_vs_nat_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if (skb->len > mtu) {
++ if (__mtu_check_toobig_v6(skb, mtu)) {
+ dst_release(&rt->u.dst);
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ IP_VS_DBG_RL_PKT(0, pp, skb, 0,
+@@ -561,8 +571,8 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ df |= (old_iph->frag_off & htons(IP_DF));
+
+- if ((old_iph->frag_off & htons(IP_DF))
+- && mtu < ntohs(old_iph->tot_len)) {
++ if ((old_iph->frag_off & htons(IP_DF) &&
++ mtu < ntohs(old_iph->tot_len) && !skb_is_gso(skb))) {
+ icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu));
+ ip_rt_put(rt);
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+@@ -671,7 +681,8 @@ ip_vs_tunnel_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
+ if (skb_dst(skb))
+ skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu);
+
+- if (mtu < ntohs(old_iph->payload_len) + sizeof(struct ipv6hdr)) {
++ /* MTU checking: Notice that 'mtu' have been adjusted before hand */
++ if (__mtu_check_toobig_v6(skb, mtu)) {
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ dst_release(&rt->u.dst);
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+@@ -760,7 +771,7 @@ ip_vs_dr_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if ((iph->frag_off & htons(IP_DF)) && skb->len > mtu) {
++ if ((iph->frag_off & htons(IP_DF)) && skb->len > mtu && !skb_is_gso(skb)) {
+ icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu));
+ ip_rt_put(rt);
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+@@ -813,7 +824,7 @@ ip_vs_dr_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if (skb->len > mtu) {
++ if (__mtu_check_toobig_v6(skb, mtu)) {
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ dst_release(&rt->u.dst);
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+@@ -888,7 +899,7 @@ ip_vs_icmp_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if ((skb->len > mtu) && (ip_hdr(skb)->frag_off & htons(IP_DF))) {
++ if ((skb->len > mtu) && (ip_hdr(skb)->frag_off & htons(IP_DF)) && !skb_is_gso(skb)) {
+ ip_rt_put(rt);
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+@@ -963,7 +974,7 @@ ip_vs_icmp_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
+
+ /* MTU checking */
+ mtu = dst_mtu(&rt->u.dst);
+- if (skb->len > mtu) {
++ if (__mtu_check_toobig_v6(skb, mtu)) {
+ dst_release(&rt->u.dst);
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ IP_VS_DBG_RL("%s(): frag needed\n", __func__);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 35cfa79..728c080 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -828,7 +828,6 @@ static void tpacket_destruct_skb(struct sk_buff *skb)
+
+ if (likely(po->tx_ring.pg_vec)) {
+ ph = skb_shinfo(skb)->destructor_arg;
+- BUG_ON(__packet_get_status(po, ph) != TP_STATUS_SENDING);
+ BUG_ON(atomic_read(&po->tx_ring.pending) == 0);
+ atomic_dec(&po->tx_ring.pending);
+ __packet_set_status(po, ph, TP_STATUS_AVAILABLE);
+diff --git a/net/rds/recv.c b/net/rds/recv.c
+index 6a2654a..c45a881c 100644
+--- a/net/rds/recv.c
++++ b/net/rds/recv.c
+@@ -410,6 +410,8 @@ int rds_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
+
+ rdsdebug("size %zu flags 0x%x timeo %ld\n", size, msg_flags, timeo);
+
++ msg->msg_namelen = 0;
++
+ if (msg_flags & MSG_OOB)
+ goto out;
+
+@@ -486,6 +488,7 @@ int rds_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
+ sin->sin_port = inc->i_hdr.h_sport;
+ sin->sin_addr.s_addr = inc->i_saddr;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
++ msg->msg_namelen = sizeof(*sin);
+ }
+ break;
+ }
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 523efbb..2984999 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -1275,6 +1275,7 @@ static int rose_recvmsg(struct kiocb *iocb, struct socket *sock,
+ skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
+
+ if (srose != NULL) {
++ memset(srose, 0, msg->msg_namelen);
+ srose->srose_family = AF_ROSE;
+ srose->srose_addr = rose->dest_addr;
+ srose->srose_call = rose->dest_call;
+diff --git a/net/sched/act_gact.c b/net/sched/act_gact.c
+index f9fc6ec..faebd8a 100644
+--- a/net/sched/act_gact.c
++++ b/net/sched/act_gact.c
+@@ -67,6 +67,9 @@ static int tcf_gact_init(struct nlattr *nla, struct nlattr *est,
+ struct tcf_common *pc;
+ int ret = 0;
+ int err;
++#ifdef CONFIG_GACT_PROB
++ struct tc_gact_p *p_parm = NULL;
++#endif
+
+ if (nla == NULL)
+ return -EINVAL;
+@@ -82,6 +85,12 @@ static int tcf_gact_init(struct nlattr *nla, struct nlattr *est,
+ #ifndef CONFIG_GACT_PROB
+ if (tb[TCA_GACT_PROB] != NULL)
+ return -EOPNOTSUPP;
++#else
++ if (tb[TCA_GACT_PROB]) {
++ p_parm = nla_data(tb[TCA_GACT_PROB]);
++ if (p_parm->ptype >= MAX_RAND)
++ return -EINVAL;
++ }
+ #endif
+
+ pc = tcf_hash_check(parm->index, a, bind, &gact_hash_info);
+@@ -103,8 +112,7 @@ static int tcf_gact_init(struct nlattr *nla, struct nlattr *est,
+ spin_lock_bh(&gact->tcf_lock);
+ gact->tcf_action = parm->action;
+ #ifdef CONFIG_GACT_PROB
+- if (tb[TCA_GACT_PROB] != NULL) {
+- struct tc_gact_p *p_parm = nla_data(tb[TCA_GACT_PROB]);
++ if (p_parm) {
+ gact->tcfg_paction = p_parm->paction;
+ gact->tcfg_pval = p_parm->pval;
+ gact->tcfg_ptype = p_parm->ptype;
+@@ -132,7 +140,7 @@ static int tcf_gact(struct sk_buff *skb, struct tc_action *a, struct tcf_result
+
+ spin_lock(&gact->tcf_lock);
+ #ifdef CONFIG_GACT_PROB
+- if (gact->tcfg_ptype && gact_rand[gact->tcfg_ptype] != NULL)
++ if (gact->tcfg_ptype)
+ action = gact_rand[gact->tcfg_ptype](gact);
+ else
+ action = gact->tcf_action;
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index 85acab9..2f074d6 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -865,7 +865,7 @@ static struct sk_buff *htb_dequeue(struct Qdisc *sch)
+ q->now = psched_get_time();
+ start_at = jiffies;
+
+- next_event = q->now + 5 * PSCHED_TICKS_PER_SEC;
++ next_event = q->now + 5LLU * PSCHED_TICKS_PER_SEC;
+
+ for (level = 0; level < TC_HTB_MAXDEPTH; level++) {
+ /* common case optimization - skip event handler quickly */
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index 914c419..7363b9f 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -70,7 +70,7 @@ void sctp_auth_key_put(struct sctp_auth_bytes *key)
+ return;
+
+ if (atomic_dec_and_test(&key->refcnt)) {
+- kfree(key);
++ kzfree(key);
+ SCTP_DBG_OBJCNT_DEC(keys);
+ }
+ }
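The `kfree()` to `kzfree()` change above makes sure SCTP auth key material is scrubbed before the memory returns to the allocator. A user-space analogue of what `kzfree()` does, split into a scrub step and a free step purely so the scrub is observable (the split is for illustration, not how the kernel structures it):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Wipe secret material so it cannot linger in freed heap memory. */
static void scrub(void *p, size_t len)
{
    memset(p, 0, len);
}

/* User-space sketch of kzfree(): zero, then release. */
static void zfree(void *p, size_t len)
{
    if (!p)
        return;
    scrub(p, len);
    free(p);
}
```

Without the wipe, a later allocation of the same chunk could hand the old key bytes to unrelated code, which is exactly the leak the hunk closes.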
+diff --git a/net/sctp/chunk.c b/net/sctp/chunk.c
+index acf7c4d..b29621d 100644
+--- a/net/sctp/chunk.c
++++ b/net/sctp/chunk.c
+@@ -272,7 +272,7 @@ struct sctp_datamsg *sctp_datamsg_from_user(struct sctp_association *asoc,
+ goto errout;
+ err = sctp_user_addto_chunk(chunk, offset, len, msgh->msg_iov);
+ if (err < 0)
+- goto errout;
++ goto errout_chunk_free;
+
+ offset += len;
+
+@@ -308,7 +308,7 @@ struct sctp_datamsg *sctp_datamsg_from_user(struct sctp_association *asoc,
+ __skb_pull(chunk->skb, (__u8 *)chunk->chunk_hdr
+ - (__u8 *)chunk->skb->data);
+ if (err < 0)
+- goto errout;
++ goto errout_chunk_free;
+
+ sctp_datamsg_assign(msg, chunk);
+ list_add_tail(&chunk->frag_list, &msg->chunks);
+@@ -316,6 +316,9 @@ struct sctp_datamsg *sctp_datamsg_from_user(struct sctp_association *asoc,
+
+ return msg;
+
++errout_chunk_free:
++ sctp_chunk_free(chunk);
++
+ errout:
+ list_for_each_safe(pos, temp, &msg->chunks) {
+ list_del_init(pos);
+diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
+index 905fda5..ca48660 100644
+--- a/net/sctp/endpointola.c
++++ b/net/sctp/endpointola.c
+@@ -249,6 +249,8 @@ void sctp_endpoint_free(struct sctp_endpoint *ep)
+ /* Final destructor for endpoint. */
+ static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
+ {
++ int i;
++
+ SCTP_ASSERT(ep->base.dead, "Endpoint is not dead", return);
+
+ /* Free up the HMAC transform. */
+@@ -271,6 +273,9 @@ static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
+ sctp_inq_free(&ep->base.inqueue);
+ sctp_bind_addr_free(&ep->base.bind_addr);
+
++ for (i = 0; i < SCTP_HOW_MANY_SECRETS; ++i)
++ memset(&ep->secret_key[i], 0, SCTP_SECRET_SIZE);
++
+ /* Remove and free the port */
+ if (sctp_sk(ep->base.sk)->bind_hash)
+ sctp_put_port(ep->base.sk);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 1f9843e..26ffae2 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -3271,7 +3271,7 @@ static int sctp_setsockopt_auth_key(struct sock *sk,
+
+ ret = sctp_auth_set_key(sctp_sk(sk)->ep, asoc, authkey);
+ out:
+- kfree(authkey);
++ kzfree(authkey);
+ return ret;
+ }
+
+diff --git a/net/socket.c b/net/socket.c
+index d449812..bf9fc68 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -732,9 +732,9 @@ static ssize_t sock_sendpage(struct file *file, struct page *page,
+
+ sock = file->private_data;
+
+- flags = !(file->f_flags & O_NONBLOCK) ? 0 : MSG_DONTWAIT;
+- if (more)
+- flags |= MSG_MORE;
++ flags = (file->f_flags & O_NONBLOCK) ? MSG_DONTWAIT : 0;
++ /* more is a combination of MSG_MORE and MSG_SENDPAGE_NOTLAST */
++ flags |= more;
+
+ return kernel_sendpage(sock, page, offset, size, flags);
+ }
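This socket.c hunk works together with the earlier tcp.c hunk: `sock_sendpage()` now forwards the caller's `more` value, which may carry `MSG_SENDPAGE_NOTLAST` for every page of a sendfile except the final one, and `tcp_sendpage` only pushes when that flag is absent. A sketch of how the two pieces combine (flag values are illustrative constants, not taken from the kernel headers):

```c
#include <assert.h>
#include <stdbool.h>

#define MSG_DONTWAIT         0x0040   /* illustrative values */
#define MSG_MORE             0x8000
#define MSG_SENDPAGE_NOTLAST 0x20000

/* sock_sendpage() after the change: 'more' is ORed in verbatim, so it
 * may combine MSG_MORE and MSG_SENDPAGE_NOTLAST. */
static int sendpage_flags(bool nonblock, int more)
{
    int flags = nonblock ? MSG_DONTWAIT : 0;
    flags |= more;
    return flags;
}

/* tcp.c after the change: defer tcp_push() for every non-final page. */
static bool should_push(int copied, int flags)
{
    return copied && !(flags & MSG_SENDPAGE_NOTLAST);
}
```

Batching the push until the last page avoids emitting a partial segment per page during large sendfile transfers.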
+diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
+index ea1e6de..43aa601 100644
+--- a/net/sunrpc/rpc_pipe.c
++++ b/net/sunrpc/rpc_pipe.c
+@@ -459,7 +459,7 @@ static int __rpc_create_common(struct inode *dir, struct dentry *dentry,
+ {
+ struct inode *inode;
+
+- BUG_ON(!d_unhashed(dentry));
++ d_drop(dentry);
+ inode = rpc_get_inode(dir->i_sb, mode);
+ if (!inode)
+ goto out_err;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 8ebf4975..eccb86b 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -800,6 +800,7 @@ static void set_orig_addr(struct msghdr *m, struct tipc_msg *msg)
+ if (addr) {
+ addr->family = AF_TIPC;
+ addr->addrtype = TIPC_ADDR_ID;
++ memset(&addr->addr, 0, sizeof(addr->addr));
+ addr->addr.id.ref = msg_origport(msg);
+ addr->addr.id.node = msg_orignode(msg);
+ addr->addr.name.domain = 0; /* could leave uninitialized */
+@@ -916,6 +917,9 @@ static int recv_msg(struct kiocb *iocb, struct socket *sock,
+ goto exit;
+ }
+
++ /* will be updated in set_orig_addr() if needed */
++ m->msg_namelen = 0;
++
+ restart:
+
+ /* Look for a message in receive queue; wait if necessary */
+@@ -1049,6 +1053,9 @@ static int recv_stream(struct kiocb *iocb, struct socket *sock,
+ goto exit;
+ }
+
++ /* will be updated in set_orig_addr() if needed */
++ m->msg_namelen = 0;
++
+ restart:
+
+ /* Look for a message in receive queue; wait if necessary */
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index db8d51a..d146b76 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -370,7 +370,7 @@ static void unix_sock_destructor(struct sock *sk)
+ #endif
+ }
+
+-static int unix_release_sock(struct sock *sk, int embrion)
++static void unix_release_sock(struct sock *sk, int embrion)
+ {
+ struct unix_sock *u = unix_sk(sk);
+ struct dentry *dentry;
+@@ -445,8 +445,6 @@ static int unix_release_sock(struct sock *sk, int embrion)
+
+ if (unix_tot_inflight)
+ unix_gc(); /* Garbage collect fds */
+-
+- return 0;
+ }
+
+ static int unix_listen(struct socket *sock, int backlog)
+@@ -660,9 +658,10 @@ static int unix_release(struct socket *sock)
+ if (!sk)
+ return 0;
+
++ unix_release_sock(sk, 0);
+ sock->sk = NULL;
+
+- return unix_release_sock(sk, 0);
++ return 0;
+ }
+
+ static int unix_autobind(struct socket *sock)
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index b95a2d6..06f42f6 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -506,6 +506,7 @@ out:
+
+ static void copy_to_user_state(struct xfrm_state *x, struct xfrm_usersa_info *p)
+ {
++ memset(p, 0, sizeof(*p));
+ memcpy(&p->id, &x->id, sizeof(p->id));
+ memcpy(&p->sel, &x->sel, sizeof(p->sel));
+ memcpy(&p->lft, &x->lft, sizeof(p->lft));
+@@ -646,6 +647,7 @@ static struct sk_buff *xfrm_state_netlink(struct sk_buff *in_skb,
+ {
+ struct xfrm_dump_info info;
+ struct sk_buff *skb;
++ int err;
+
+ skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
+ if (!skb)
+@@ -656,9 +658,10 @@ static struct sk_buff *xfrm_state_netlink(struct sk_buff *in_skb,
+ info.nlmsg_seq = seq;
+ info.nlmsg_flags = 0;
+
+- if (dump_one_state(x, 0, &info)) {
++ err = dump_one_state(x, 0, &info);
++ if (err) {
+ kfree_skb(skb);
+- return NULL;
++ return ERR_PTR(err);
+ }
+
+ return skb;
+@@ -1075,6 +1078,7 @@ static void copy_from_user_policy(struct xfrm_policy *xp, struct xfrm_userpolicy
+
+ static void copy_to_user_policy(struct xfrm_policy *xp, struct xfrm_userpolicy_info *p, int dir)
+ {
++ memset(p, 0, sizeof(*p));
+ memcpy(&p->sel, &xp->selector, sizeof(p->sel));
+ memcpy(&p->lft, &xp->lft, sizeof(p->lft));
+ memcpy(&p->curlft, &xp->curlft, sizeof(p->curlft));
+@@ -1176,6 +1180,7 @@ static int copy_to_user_tmpl(struct xfrm_policy *xp, struct sk_buff *skb)
+ struct xfrm_user_tmpl *up = &vec[i];
+ struct xfrm_tmpl *kp = &xp->xfrm_vec[i];
+
++ memset(up, 0, sizeof(*up));
+ memcpy(&up->id, &kp->id, sizeof(up->id));
+ up->family = kp->encap_family;
+ memcpy(&up->saddr, &kp->saddr, sizeof(up->saddr));
+@@ -1301,6 +1306,7 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb,
+ {
+ struct xfrm_dump_info info;
+ struct sk_buff *skb;
++ int err;
+
+ skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!skb)
+@@ -1311,9 +1317,10 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb,
+ info.nlmsg_seq = seq;
+ info.nlmsg_flags = 0;
+
+- if (dump_one_policy(xp, dir, 0, &info) < 0) {
++ err = dump_one_policy(xp, dir, 0, &info);
++ if (err) {
+ kfree_skb(skb);
+- return NULL;
++ return ERR_PTR(err);
+ }
+
+ return skb;
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 92b62a8..5405ff17 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -94,24 +94,24 @@ try-run = $(shell set -e; \
+ # Usage: cflags-y += $(call as-option,-Wa$(comma)-isa=foo,)
+
+ as-option = $(call try-run,\
+- $(CC) $(KBUILD_CFLAGS) $(1) -c -xassembler /dev/null -o "$$TMP",$(1),$(2))
++ $(CC) $(KBUILD_CFLAGS) $(1) -c -x assembler /dev/null -o "$$TMP",$(1),$(2))
+
+ # as-instr
+ # Usage: cflags-y += $(call as-instr,instr,option1,option2)
+
+ as-instr = $(call try-run,\
+- /bin/echo -e "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" -,$(2),$(3))
++ /bin/echo -e "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -x assembler -o "$$TMP" -,$(2),$(3))
+
+ # cc-option
+ # Usage: cflags-y += $(call cc-option,-march=winchip-c6,-march=i586)
+
+ cc-option = $(call try-run,\
+- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -xc /dev/null -o "$$TMP",$(1),$(2))
++ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
+
+ # cc-option-yn
+ # Usage: flag := $(call cc-option-yn,-march=winchip-c6)
+ cc-option-yn = $(call try-run,\
+- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -xc /dev/null -o "$$TMP",y,n)
++ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",y,n)
+
+ # cc-option-align
+ # Prefix align with either -falign or -malign
+@@ -121,7 +121,7 @@ cc-option-align = $(subst -functions=0,,\
+ # cc-disable-warning
+ # Usage: cflags-y += $(call cc-disable-warning,unused-but-set-variable)
+ cc-disable-warning = $(call try-run,\
+- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -W$(strip $(1)) -c -xc /dev/null -o "$$TMP",-Wno-$(strip $(1)))
++ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
+
+ # cc-version
+ # Usage gcc-ver := $(call cc-version)
+@@ -139,7 +139,7 @@ cc-ifversion = $(shell [ $(call cc-version, $(CC)) $(1) $(2) ] && echo $(3))
+ # cc-ldoption
+ # Usage: ldflags += $(call cc-ldoption, -Wl$(comma)--hash-style=both)
+ cc-ldoption = $(call try-run,\
+- $(CC) $(1) -nostdlib -xc /dev/null -o "$$TMP",$(1),$(2))
++ $(CC) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
+
+ # ld-option
+ # Usage: LDFLAGS += $(call ld-option, -X)
+diff --git a/scripts/gcc-version.sh b/scripts/gcc-version.sh
+index debecb5..7f2126d 100644
+--- a/scripts/gcc-version.sh
++++ b/scripts/gcc-version.sh
+@@ -22,10 +22,10 @@ if [ ${#compiler} -eq 0 ]; then
+ exit 1
+ fi
+
+-MAJOR=$(echo __GNUC__ | $compiler -E -xc - | tail -n 1)
+-MINOR=$(echo __GNUC_MINOR__ | $compiler -E -xc - | tail -n 1)
++MAJOR=$(echo __GNUC__ | $compiler -E -x c - | tail -n 1)
++MINOR=$(echo __GNUC_MINOR__ | $compiler -E -x c - | tail -n 1)
+ if [ "x$with_patchlevel" != "x" ] ; then
+- PATCHLEVEL=$(echo __GNUC_PATCHLEVEL__ | $compiler -E -xc - | tail -n 1)
++ PATCHLEVEL=$(echo __GNUC_PATCHLEVEL__ | $compiler -E -x c - | tail -n 1)
+ printf "%02d%02d%02d\\n" $MAJOR $MINOR $PATCHLEVEL
+ else
+ printf "%02d%02d\\n" $MAJOR $MINOR
+diff --git a/scripts/gcc-x86_32-has-stack-protector.sh b/scripts/gcc-x86_32-has-stack-protector.sh
+index 29493dc..12dbd0b 100644
+--- a/scripts/gcc-x86_32-has-stack-protector.sh
++++ b/scripts/gcc-x86_32-has-stack-protector.sh
+@@ -1,6 +1,6 @@
+ #!/bin/sh
+
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -xc -c -O0 -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+ if [ "$?" -eq "0" ] ; then
+ echo y
+ else
+diff --git a/scripts/gcc-x86_64-has-stack-protector.sh b/scripts/gcc-x86_64-has-stack-protector.sh
+index afaec61..973e8c1 100644
+--- a/scripts/gcc-x86_64-has-stack-protector.sh
++++ b/scripts/gcc-x86_64-has-stack-protector.sh
+@@ -1,6 +1,6 @@
+ #!/bin/sh
+
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -xc -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+ if [ "$?" -eq "0" ] ; then
+ echo y
+ else
+diff --git a/scripts/kconfig/check.sh b/scripts/kconfig/check.sh
+index fa59cbf..854d9c7 100755
+--- a/scripts/kconfig/check.sh
++++ b/scripts/kconfig/check.sh
+@@ -1,6 +1,6 @@
+ #!/bin/sh
+ # Needed for systems without gettext
+-$* -xc -o /dev/null - > /dev/null 2>&1 << EOF
++$* -x c -o /dev/null - > /dev/null 2>&1 << EOF
+ #include <libintl.h>
+ int main()
+ {
+diff --git a/scripts/kconfig/lxdialog/check-lxdialog.sh b/scripts/kconfig/lxdialog/check-lxdialog.sh
+index fcef0f5..4bab9e2 100644
+--- a/scripts/kconfig/lxdialog/check-lxdialog.sh
++++ b/scripts/kconfig/lxdialog/check-lxdialog.sh
+@@ -36,7 +36,7 @@ trap "rm -f $tmp" 0 1 2 3 15
+
+ # Check if we can link to ncurses
+ check() {
+- $cc -xc - -o $tmp 2>/dev/null <<'EOF'
++ $cc -x c - -o $tmp 2>/dev/null <<'EOF'
+ #include CURSES_LOC
+ main() {}
+ EOF
+diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c
+index 931cfda..75fb18c 100644
+--- a/security/keys/process_keys.c
++++ b/security/keys/process_keys.c
+@@ -56,7 +56,7 @@ int install_user_keyrings(void)
+
+ kenter("%p{%u}", user, user->uid);
+
+- if (user->uid_keyring) {
++ if (user->uid_keyring && user->session_keyring) {
+ kleave(" = 0 [exist]");
+ return 0;
+ }
+diff --git a/sound/core/seq/seq_timer.c b/sound/core/seq/seq_timer.c
+index f745c31..c2ec4ef 100644
+--- a/sound/core/seq/seq_timer.c
++++ b/sound/core/seq/seq_timer.c
+@@ -291,10 +291,10 @@ int snd_seq_timer_open(struct snd_seq_queue *q)
+ tid.device = SNDRV_TIMER_GLOBAL_SYSTEM;
+ err = snd_timer_open(&t, str, &tid, q->queue);
+ }
+- if (err < 0) {
+- snd_printk(KERN_ERR "seq fatal error: cannot create timer (%i)\n", err);
+- return err;
+- }
++ }
++ if (err < 0) {
++ snd_printk(KERN_ERR "seq fatal error: cannot create timer (%i)\n", err);
++ return err;
+ }
+ t->callback = snd_seq_timer_interrupt;
+ t->callback_data = q;
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 78288db..5f295f7 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -1252,6 +1252,8 @@ static int snd_ac97_cvol_new(struct snd_card *card, char *name, int reg, unsigne
+ tmp.index = ac97->num;
+ kctl = snd_ctl_new1(&tmp, ac97);
+ }
++ if (!kctl)
++ return -ENOMEM;
+ if (reg >= AC97_PHONE && reg <= AC97_PCM)
+ set_tlv_db_scale(kctl, db_scale_5bit_12db_max);
+ else
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6419095..d9b4453 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -131,8 +131,8 @@ enum {
+ enum {
+ ALC269_BASIC,
+ ALC269_QUANTA_FL1,
+- ALC269_ASUS_EEEPC_P703,
+- ALC269_ASUS_EEEPC_P901,
++ ALC269_ASUS_AMIC,
++ ALC269_ASUS_DMIC,
+ ALC269_FUJITSU,
+ ALC269_LIFEBOOK,
+ ALC269_AUTO,
+@@ -188,6 +188,8 @@ enum {
+ ALC663_ASUS_MODE4,
+ ALC663_ASUS_MODE5,
+ ALC663_ASUS_MODE6,
++ ALC663_ASUS_MODE7,
++ ALC663_ASUS_MODE8,
+ ALC272_DELL,
+ ALC272_DELL_ZM1,
+ ALC272_SAMSUNG_NC10,
+@@ -13234,10 +13236,12 @@ static struct hda_verb alc269_eeepc_amic_init_verbs[] = {
+ /* toggle speaker-output according to the hp-jack state */
+ static void alc269_speaker_automute(struct hda_codec *codec)
+ {
++ struct alc_spec *spec = codec->spec;
++ unsigned int nid = spec->autocfg.hp_pins[0];
+ unsigned int present;
+ unsigned char bits;
+
+- present = snd_hda_codec_read(codec, 0x15, 0,
++ present = snd_hda_codec_read(codec, nid, 0,
+ AC_VERB_GET_PIN_SENSE, 0) & 0x80000000;
+ bits = present ? AMP_IN_MUTE(0) : 0;
+ snd_hda_codec_amp_stereo(codec, 0x0c, HDA_INPUT, 0,
+@@ -13463,8 +13467,8 @@ static void alc269_auto_init(struct hda_codec *codec)
+ static const char *alc269_models[ALC269_MODEL_LAST] = {
+ [ALC269_BASIC] = "basic",
+ [ALC269_QUANTA_FL1] = "quanta",
+- [ALC269_ASUS_EEEPC_P703] = "eeepc-p703",
+- [ALC269_ASUS_EEEPC_P901] = "eeepc-p901",
++ [ALC269_ASUS_AMIC] = "asus-amic",
++ [ALC269_ASUS_DMIC] = "asus-dmic",
+ [ALC269_FUJITSU] = "fujitsu",
+ [ALC269_LIFEBOOK] = "lifebook",
+ [ALC269_AUTO] = "auto",
+@@ -13473,18 +13477,41 @@ static const char *alc269_models[ALC269_MODEL_LAST] = {
+ static struct snd_pci_quirk alc269_cfg_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_QUANTA_FL1),
+ SND_PCI_QUIRK(0x1043, 0x8330, "ASUS Eeepc P703 P900A",
+- ALC269_ASUS_EEEPC_P703),
+- SND_PCI_QUIRK(0x1043, 0x1883, "ASUS F81Se", ALC269_ASUS_EEEPC_P703),
+- SND_PCI_QUIRK(0x1043, 0x16a3, "ASUS F5Q", ALC269_ASUS_EEEPC_P703),
+- SND_PCI_QUIRK(0x1043, 0x1723, "ASUS P80", ALC269_ASUS_EEEPC_P703),
+- SND_PCI_QUIRK(0x1043, 0x1773, "ASUS U20A", ALC269_ASUS_EEEPC_P703),
+- SND_PCI_QUIRK(0x1043, 0x1743, "ASUS U80", ALC269_ASUS_EEEPC_P703),
+- SND_PCI_QUIRK(0x1043, 0x1653, "ASUS U50", ALC269_ASUS_EEEPC_P703),
++ ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1133, "ASUS UJ20ft", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1273, "ASUS UL80JT", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1283, "ASUS U53Jc", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x12b3, "ASUS N82Jv", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x13a3, "ASUS UL30Vt", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1373, "ASUS G73JX", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1383, "ASUS UJ30Jc", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x13d3, "ASUS N61JA", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1413, "ASUS UL50", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1443, "ASUS UL30", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1453, "ASUS M60Jv", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1483, "ASUS UL80", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x14f3, "ASUS F83Vf", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x14e3, "ASUS UL20", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1513, "ASUS UX30", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x15a3, "ASUS N60Jv", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x15b3, "ASUS N60Dp", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x15c3, "ASUS N70De", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x15e3, "ASUS F83T", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1643, "ASUS M60J", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1653, "ASUS U50", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1693, "ASUS F50N", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x16a3, "ASUS F5Q", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_ASUS_DMIC),
++ SND_PCI_QUIRK(0x1043, 0x1723, "ASUS P80", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1743, "ASUS U80", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1773, "ASUS U20A", ALC269_ASUS_AMIC),
++ SND_PCI_QUIRK(0x1043, 0x1883, "ASUS F81Se", ALC269_ASUS_AMIC),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS Eeepc P901",
+- ALC269_ASUS_EEEPC_P901),
++ ALC269_ASUS_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS Eeepc S101",
+- ALC269_ASUS_EEEPC_P901),
+- SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_ASUS_EEEPC_P901),
++ ALC269_ASUS_DMIC),
++ SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005HA", ALC269_ASUS_DMIC),
++ SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005HA", ALC269_ASUS_DMIC),
+ SND_PCI_QUIRK(0x1734, 0x115d, "FSC Amilo", ALC269_FUJITSU),
+ SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook ICH9M-based", ALC269_LIFEBOOK),
+ {}
+@@ -13514,7 +13541,7 @@ static struct alc_config_preset alc269_presets[] = {
+ .setup = alc269_quanta_fl1_setup,
+ .init_hook = alc269_quanta_fl1_init_hook,
+ },
+- [ALC269_ASUS_EEEPC_P703] = {
++ [ALC269_ASUS_AMIC] = {
+ .mixers = { alc269_eeepc_mixer },
+ .cap_mixer = alc269_epc_capture_mixer,
+ .init_verbs = { alc269_init_verbs,
+@@ -13528,7 +13555,7 @@ static struct alc_config_preset alc269_presets[] = {
+ .setup = alc269_eeepc_amic_setup,
+ .init_hook = alc269_eeepc_inithook,
+ },
+- [ALC269_ASUS_EEEPC_P901] = {
++ [ALC269_ASUS_DMIC] = {
+ .mixers = { alc269_eeepc_mixer },
+ .cap_mixer = alc269_epc_capture_mixer,
+ .init_verbs = { alc269_init_verbs,
+@@ -14686,6 +14713,27 @@ static struct alc_config_preset alc861_presets[] = {
+ },
+ };
+
++/* Pin config fixes */
++enum {
++ PINFIX_FSC_AMILO_PI1505,
++};
++
++static struct alc_pincfg alc861_fsc_amilo_pi1505_pinfix[] = {
++ { 0x0b, 0x0221101f }, /* HP */
++ { 0x0f, 0x90170310 }, /* speaker */
++ { }
++};
++
++static const struct alc_fixup alc861_fixups[] = {
++ [PINFIX_FSC_AMILO_PI1505] = {
++ .pins = alc861_fsc_amilo_pi1505_pinfix
++ },
++};
++
++static struct snd_pci_quirk alc861_fixup_tbl[] = {
++ SND_PCI_QUIRK(0x1734, 0x10c7, "FSC Amilo Pi1505", PINFIX_FSC_AMILO_PI1505),
++ {}
++};
+
+ static int patch_alc861(struct hda_codec *codec)
+ {
+@@ -14709,6 +14757,8 @@ static int patch_alc861(struct hda_codec *codec)
+ board_config = ALC861_AUTO;
+ }
+
++ alc_pick_fixup(codec, alc861_fixup_tbl, alc861_fixups);
++
+ if (board_config == ALC861_AUTO) {
+ /* automatic parse from the BIOS config */
+ err = alc861_parse_auto_config(codec);
+@@ -16144,6 +16194,52 @@ static struct snd_kcontrol_new alc663_g50v_mixer[] = {
+ { } /* end */
+ };
+
++static struct hda_bind_ctls alc663_asus_mode7_8_all_bind_switch = {
++ .ops = &snd_hda_bind_sw,
++ .values = {
++ HDA_COMPOSE_AMP_VAL(0x14, 3, 0, HDA_OUTPUT),
++ HDA_COMPOSE_AMP_VAL(0x15, 3, 0, HDA_OUTPUT),
++ HDA_COMPOSE_AMP_VAL(0x17, 3, 0, HDA_OUTPUT),
++ HDA_COMPOSE_AMP_VAL(0x1b, 3, 0, HDA_OUTPUT),
++ HDA_COMPOSE_AMP_VAL(0x21, 3, 0, HDA_OUTPUT),
++ 0
++ },
++};
++
++static struct hda_bind_ctls alc663_asus_mode7_8_sp_bind_switch = {
++ .ops = &snd_hda_bind_sw,
++ .values = {
++ HDA_COMPOSE_AMP_VAL(0x14, 3, 0, HDA_OUTPUT),
++ HDA_COMPOSE_AMP_VAL(0x17, 3, 0, HDA_OUTPUT),
++ 0
++ },
++};
++
++static struct snd_kcontrol_new alc663_mode7_mixer[] = {
++ HDA_BIND_SW("Master Playback Switch", &alc663_asus_mode7_8_all_bind_switch),
++ HDA_BIND_VOL("Speaker Playback Volume", &alc663_asus_bind_master_vol),
++ HDA_BIND_SW("Speaker Playback Switch", &alc663_asus_mode7_8_sp_bind_switch),
++ HDA_CODEC_MUTE("Headphone1 Playback Switch", 0x1b, 0x0, HDA_OUTPUT),
++ HDA_CODEC_MUTE("Headphone2 Playback Switch", 0x21, 0x0, HDA_OUTPUT),
++ HDA_CODEC_VOLUME("IntMic Playback Volume", 0x0b, 0x0, HDA_INPUT),
++ HDA_CODEC_MUTE("IntMic Playback Switch", 0x0b, 0x0, HDA_INPUT),
++ HDA_CODEC_VOLUME("Mic Playback Volume", 0x0b, 0x1, HDA_INPUT),
++ HDA_CODEC_MUTE("Mic Playback Switch", 0x0b, 0x1, HDA_INPUT),
++ { } /* end */
++};
++
++static struct snd_kcontrol_new alc663_mode8_mixer[] = {
++ HDA_BIND_SW("Master Playback Switch", &alc663_asus_mode7_8_all_bind_switch),
++ HDA_BIND_VOL("Speaker Playback Volume", &alc663_asus_bind_master_vol),
++ HDA_BIND_SW("Speaker Playback Switch", &alc663_asus_mode7_8_sp_bind_switch),
++ HDA_CODEC_MUTE("Headphone1 Playback Switch", 0x15, 0x0, HDA_OUTPUT),
++ HDA_CODEC_MUTE("Headphone2 Playback Switch", 0x21, 0x0, HDA_OUTPUT),
++ HDA_CODEC_VOLUME("Mic Playback Volume", 0x0b, 0x0, HDA_INPUT),
++ HDA_CODEC_MUTE("Mic Playback Switch", 0x0b, 0x0, HDA_INPUT),
++ { } /* end */
++};
++
++
+ static struct snd_kcontrol_new alc662_chmode_mixer[] = {
+ {
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+@@ -16431,6 +16527,45 @@ static struct hda_verb alc272_dell_init_verbs[] = {
+ {}
+ };
+
++static struct hda_verb alc663_mode7_init_verbs[] = {
++ {0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN},
++ {0x16, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN},
++ {0x17, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT},
++ {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
++ {0x1b, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP},
++ {0x1b, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
++ {0x1b, AC_VERB_SET_CONNECT_SEL, 0x01},
++ {0x21, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP},
++ {0x21, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
++ {0x21, AC_VERB_SET_CONNECT_SEL, 0x01}, /* Headphone */
++ {0x22, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)},
++ {0x22, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(9)},
++ {0x19, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | ALC880_MIC_EVENT},
++ {0x1b, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | ALC880_HP_EVENT},
++ {0x21, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | ALC880_HP_EVENT},
++ {}
++};
++
++static struct hda_verb alc663_mode8_init_verbs[] = {
++ {0x12, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN},
++ {0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP},
++ {0x15, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
++ {0x15, AC_VERB_SET_CONNECT_SEL, 0x01},
++ {0x16, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN},
++ {0x17, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT},
++ {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
++ {0x1b, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN},
++ {0x21, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP},
++ {0x21, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
++ {0x21, AC_VERB_SET_CONNECT_SEL, 0x01}, /* Headphone */
++ {0x22, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)},
++ {0x22, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(9)},
++ {0x15, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | ALC880_HP_EVENT},
++ {0x18, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | ALC880_MIC_EVENT},
++ {0x21, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | ALC880_HP_EVENT},
++ {}
++};
++
+ static struct snd_kcontrol_new alc662_auto_capture_mixer[] = {
+ HDA_CODEC_VOLUME("Capture Volume", 0x09, 0x0, HDA_INPUT),
+ HDA_CODEC_MUTE("Capture Switch", 0x09, 0x0, HDA_INPUT),
+@@ -16626,6 +16761,54 @@ static void alc663_two_hp_m2_speaker_automute(struct hda_codec *codec)
+ }
+ }
+
++static void alc663_two_hp_m7_speaker_automute(struct hda_codec *codec)
++{
++ unsigned int present1, present2;
++
++ present1 = snd_hda_codec_read(codec, 0x1b, 0,
++ AC_VERB_GET_PIN_SENSE, 0)
++ & AC_PINSENSE_PRESENCE;
++ present2 = snd_hda_codec_read(codec, 0x21, 0,
++ AC_VERB_GET_PIN_SENSE, 0)
++ & AC_PINSENSE_PRESENCE;
++
++ if (present1 || present2) {
++ snd_hda_codec_write_cache(codec, 0x14, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
++ snd_hda_codec_write_cache(codec, 0x17, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
++ } else {
++ snd_hda_codec_write_cache(codec, 0x14, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ snd_hda_codec_write_cache(codec, 0x17, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ }
++}
++
++static void alc663_two_hp_m8_speaker_automute(struct hda_codec *codec)
++{
++ unsigned int present1, present2;
++
++ present1 = snd_hda_codec_read(codec, 0x21, 0,
++ AC_VERB_GET_PIN_SENSE, 0)
++ & AC_PINSENSE_PRESENCE;
++ present2 = snd_hda_codec_read(codec, 0x15, 0,
++ AC_VERB_GET_PIN_SENSE, 0)
++ & AC_PINSENSE_PRESENCE;
++
++ if (present1 || present2) {
++ snd_hda_codec_write_cache(codec, 0x14, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
++ snd_hda_codec_write_cache(codec, 0x17, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
++ } else {
++ snd_hda_codec_write_cache(codec, 0x14, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ snd_hda_codec_write_cache(codec, 0x17, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ }
++}
++
+ static void alc663_m51va_unsol_event(struct hda_codec *codec,
+ unsigned int res)
+ {
+@@ -16645,7 +16828,7 @@ static void alc663_m51va_setup(struct hda_codec *codec)
+ spec->ext_mic.pin = 0x18;
+ spec->ext_mic.mux_idx = 0;
+ spec->int_mic.pin = 0x12;
+- spec->int_mic.mux_idx = 1;
++ spec->int_mic.mux_idx = 9;
+ spec->auto_mic = 1;
+ }
+
+@@ -16657,7 +16840,17 @@ static void alc663_m51va_inithook(struct hda_codec *codec)
+
+ /* ***************** Mode1 ******************************/
+ #define alc663_mode1_unsol_event alc663_m51va_unsol_event
+-#define alc663_mode1_setup alc663_m51va_setup
++
++static void alc663_mode1_setup(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++ spec->ext_mic.pin = 0x18;
++ spec->ext_mic.mux_idx = 0;
++ spec->int_mic.pin = 0x19;
++ spec->int_mic.mux_idx = 1;
++ spec->auto_mic = 1;
++}
++
+ #define alc663_mode1_inithook alc663_m51va_inithook
+
+ /* ***************** Mode2 ******************************/
+@@ -16674,7 +16867,7 @@ static void alc662_mode2_unsol_event(struct hda_codec *codec,
+ }
+ }
+
+-#define alc662_mode2_setup alc663_m51va_setup
++#define alc662_mode2_setup alc663_mode1_setup
+
+ static void alc662_mode2_inithook(struct hda_codec *codec)
+ {
+@@ -16695,7 +16888,7 @@ static void alc663_mode3_unsol_event(struct hda_codec *codec,
+ }
+ }
+
+-#define alc663_mode3_setup alc663_m51va_setup
++#define alc663_mode3_setup alc663_mode1_setup
+
+ static void alc663_mode3_inithook(struct hda_codec *codec)
+ {
+@@ -16716,7 +16909,7 @@ static void alc663_mode4_unsol_event(struct hda_codec *codec,
+ }
+ }
+
+-#define alc663_mode4_setup alc663_m51va_setup
++#define alc663_mode4_setup alc663_mode1_setup
+
+ static void alc663_mode4_inithook(struct hda_codec *codec)
+ {
+@@ -16737,7 +16930,7 @@ static void alc663_mode5_unsol_event(struct hda_codec *codec,
+ }
+ }
+
+-#define alc663_mode5_setup alc663_m51va_setup
++#define alc663_mode5_setup alc663_mode1_setup
+
+ static void alc663_mode5_inithook(struct hda_codec *codec)
+ {
+@@ -16758,7 +16951,7 @@ static void alc663_mode6_unsol_event(struct hda_codec *codec,
+ }
+ }
+
+-#define alc663_mode6_setup alc663_m51va_setup
++#define alc663_mode6_setup alc663_mode1_setup
+
+ static void alc663_mode6_inithook(struct hda_codec *codec)
+ {
+@@ -16766,6 +16959,50 @@ static void alc663_mode6_inithook(struct hda_codec *codec)
+ alc_mic_automute(codec);
+ }
+
++/* ***************** Mode7 ******************************/
++static void alc663_mode7_unsol_event(struct hda_codec *codec,
++ unsigned int res)
++{
++ switch (res >> 26) {
++ case ALC880_HP_EVENT:
++ alc663_two_hp_m7_speaker_automute(codec);
++ break;
++ case ALC880_MIC_EVENT:
++ alc_mic_automute(codec);
++ break;
++ }
++}
++
++#define alc663_mode7_setup alc663_mode1_setup
++
++static void alc663_mode7_inithook(struct hda_codec *codec)
++{
++ alc663_two_hp_m7_speaker_automute(codec);
++ alc_mic_automute(codec);
++}
++
++/* ***************** Mode8 ******************************/
++static void alc663_mode8_unsol_event(struct hda_codec *codec,
++ unsigned int res)
++{
++ switch (res >> 26) {
++ case ALC880_HP_EVENT:
++ alc663_two_hp_m8_speaker_automute(codec);
++ break;
++ case ALC880_MIC_EVENT:
++ alc_mic_automute(codec);
++ break;
++ }
++}
++
++#define alc663_mode8_setup alc663_m51va_setup
++
++static void alc663_mode8_inithook(struct hda_codec *codec)
++{
++ alc663_two_hp_m8_speaker_automute(codec);
++ alc_mic_automute(codec);
++}
++
+ static void alc663_g71v_hp_automute(struct hda_codec *codec)
+ {
+ unsigned int present;
+@@ -16904,6 +17141,8 @@ static const char *alc662_models[ALC662_MODEL_LAST] = {
+ [ALC663_ASUS_MODE4] = "asus-mode4",
+ [ALC663_ASUS_MODE5] = "asus-mode5",
+ [ALC663_ASUS_MODE6] = "asus-mode6",
++ [ALC663_ASUS_MODE7] = "asus-mode7",
++ [ALC663_ASUS_MODE8] = "asus-mode8",
+ [ALC272_DELL] = "dell",
+ [ALC272_DELL_ZM1] = "dell-zm1",
+ [ALC272_SAMSUNG_NC10] = "samsung-nc10",
+@@ -16920,12 +17159,22 @@ static struct snd_pci_quirk alc662_cfg_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x11d3, "ASUS NB", ALC663_ASUS_MODE1),
+ SND_PCI_QUIRK(0x1043, 0x11f3, "ASUS NB", ALC662_ASUS_MODE2),
+ SND_PCI_QUIRK(0x1043, 0x1203, "ASUS NB", ALC663_ASUS_MODE1),
++ SND_PCI_QUIRK(0x1043, 0x1303, "ASUS G60J", ALC663_ASUS_MODE1),
++ SND_PCI_QUIRK(0x1043, 0x1333, "ASUS G60Jx", ALC663_ASUS_MODE1),
+ SND_PCI_QUIRK(0x1043, 0x1339, "ASUS NB", ALC662_ASUS_MODE2),
++ SND_PCI_QUIRK(0x1043, 0x13e3, "ASUS N71JA", ALC663_ASUS_MODE7),
++ SND_PCI_QUIRK(0x1043, 0x1463, "ASUS N71", ALC663_ASUS_MODE7),
++ SND_PCI_QUIRK(0x1043, 0x14d3, "ASUS G72", ALC663_ASUS_MODE8),
++ SND_PCI_QUIRK(0x1043, 0x1563, "ASUS N90", ALC663_ASUS_MODE3),
++ SND_PCI_QUIRK(0x1043, 0x15d3, "ASUS N50SF F50SF", ALC663_ASUS_MODE1),
+ SND_PCI_QUIRK(0x1043, 0x16c3, "ASUS NB", ALC662_ASUS_MODE2),
++ SND_PCI_QUIRK(0x1043, 0x16f3, "ASUS K40C K50C", ALC662_ASUS_MODE2),
++ SND_PCI_QUIRK(0x1043, 0x1733, "ASUS N81De", ALC663_ASUS_MODE1),
+ SND_PCI_QUIRK(0x1043, 0x1753, "ASUS NB", ALC662_ASUS_MODE2),
+ SND_PCI_QUIRK(0x1043, 0x1763, "ASUS NB", ALC663_ASUS_MODE6),
+ SND_PCI_QUIRK(0x1043, 0x1765, "ASUS NB", ALC663_ASUS_MODE6),
+ SND_PCI_QUIRK(0x1043, 0x1783, "ASUS NB", ALC662_ASUS_MODE2),
++ SND_PCI_QUIRK(0x1043, 0x1793, "ASUS F50GX", ALC663_ASUS_MODE1),
+ SND_PCI_QUIRK(0x1043, 0x17b3, "ASUS F70SL", ALC663_ASUS_MODE3),
+ SND_PCI_QUIRK(0x1043, 0x17c3, "ASUS UX20", ALC663_ASUS_M51VA),
+ SND_PCI_QUIRK(0x1043, 0x17f3, "ASUS X58LE", ALC662_ASUS_MODE2),
+@@ -17208,6 +17457,36 @@ static struct alc_config_preset alc662_presets[] = {
+ .setup = alc663_mode6_setup,
+ .init_hook = alc663_mode6_inithook,
+ },
++ [ALC663_ASUS_MODE7] = {
++ .mixers = { alc663_mode7_mixer },
++ .cap_mixer = alc662_auto_capture_mixer,
++ .init_verbs = { alc662_init_verbs,
++ alc663_mode7_init_verbs },
++ .num_dacs = ARRAY_SIZE(alc662_dac_nids),
++ .hp_nid = 0x03,
++ .dac_nids = alc662_dac_nids,
++ .dig_out_nid = ALC662_DIGOUT_NID,
++ .num_channel_mode = ARRAY_SIZE(alc662_3ST_2ch_modes),
++ .channel_mode = alc662_3ST_2ch_modes,
++ .unsol_event = alc663_mode7_unsol_event,
++ .setup = alc663_mode7_setup,
++ .init_hook = alc663_mode7_inithook,
++ },
++ [ALC663_ASUS_MODE8] = {
++ .mixers = { alc663_mode8_mixer },
++ .cap_mixer = alc662_auto_capture_mixer,
++ .init_verbs = { alc662_init_verbs,
++ alc663_mode8_init_verbs },
++ .num_dacs = ARRAY_SIZE(alc662_dac_nids),
++ .hp_nid = 0x03,
++ .dac_nids = alc662_dac_nids,
++ .dig_out_nid = ALC662_DIGOUT_NID,
++ .num_channel_mode = ARRAY_SIZE(alc662_3ST_2ch_modes),
++ .channel_mode = alc662_3ST_2ch_modes,
++ .unsol_event = alc663_mode8_unsol_event,
++ .setup = alc663_mode8_setup,
++ .init_hook = alc663_mode8_inithook,
++ },
+ [ALC272_DELL] = {
+ .mixers = { alc663_m51va_mixer },
+ .cap_mixer = alc272_auto_capture_mixer,
+@@ -17676,7 +17955,9 @@ static struct hda_codec_preset snd_hda_preset_realtek[] = {
+ { .id = 0x10ec0267, .name = "ALC267", .patch = patch_alc268 },
+ { .id = 0x10ec0268, .name = "ALC268", .patch = patch_alc268 },
+ { .id = 0x10ec0269, .name = "ALC269", .patch = patch_alc269 },
++ { .id = 0x10ec0270, .name = "ALC270", .patch = patch_alc269 },
+ { .id = 0x10ec0272, .name = "ALC272", .patch = patch_alc662 },
++ { .id = 0x10ec0275, .name = "ALC275", .patch = patch_alc269 },
+ { .id = 0x10ec0861, .rev = 0x100340, .name = "ALC660",
+ .patch = patch_alc861 },
+ { .id = 0x10ec0660, .name = "ALC660-VD", .patch = patch_alc861vd },
+diff --git a/usr/gen_init_cpio.c b/usr/gen_init_cpio.c
+index 83b3dde..13cd679 100644
+--- a/usr/gen_init_cpio.c
++++ b/usr/gen_init_cpio.c
+@@ -299,7 +299,7 @@ static int cpio_mkfile(const char *name, const char *location,
+ int retval;
+ int rc = -1;
+ int namesize;
+- int i;
++ unsigned int i;
+
+ mode |= S_IFREG;
+
+@@ -372,25 +372,28 @@ error:
+
+ static char *cpio_replace_env(char *new_location)
+ {
+- char expanded[PATH_MAX + 1];
+- char env_var[PATH_MAX + 1];
+- char *start;
+- char *end;
+-
+- for (start = NULL; (start = strstr(new_location, "${")); ) {
+- end = strchr(start, '}');
+- if (start < end) {
+- *env_var = *expanded = '\0';
+- strncat(env_var, start + 2, end - start - 2);
+- strncat(expanded, new_location, start - new_location);
+- strncat(expanded, getenv(env_var), PATH_MAX);
+- strncat(expanded, end + 1, PATH_MAX);
+- strncpy(new_location, expanded, PATH_MAX);
+- } else
+- break;
+- }
+-
+- return new_location;
++ char expanded[PATH_MAX + 1];
++ char env_var[PATH_MAX + 1];
++ char *start;
++ char *end;
++
++ for (start = NULL; (start = strstr(new_location, "${")); ) {
++ end = strchr(start, '}');
++ if (start < end) {
++ *env_var = *expanded = '\0';
++ strncat(env_var, start + 2, end - start - 2);
++ strncat(expanded, new_location, start - new_location);
++ strncat(expanded, getenv(env_var),
++ PATH_MAX - strlen(expanded));
++ strncat(expanded, end + 1,
++ PATH_MAX - strlen(expanded));
++ strncpy(new_location, expanded, PATH_MAX);
++ new_location[PATH_MAX] = 0;
++ } else
++ break;
++ }
++
++ return new_location;
+ }
+
+
+diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
+index 9fe140b..69969ae 100644
+--- a/virt/kvm/ioapic.c
++++ b/virt/kvm/ioapic.c
+@@ -71,9 +71,12 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
+ u32 redir_index = (ioapic->ioregsel - 0x10) >> 1;
+ u64 redir_content;
+
+- ASSERT(redir_index < IOAPIC_NUM_PINS);
++ if (redir_index < IOAPIC_NUM_PINS)
++ redir_content =
++ ioapic->redirtbl[redir_index].bits;
++ else
++ redir_content = ~0ULL;
+
+- redir_content = ioapic->redirtbl[redir_index].bits;
+ result = (ioapic->ioregsel & 0x1) ?
+ (redir_content >> 32) & 0xffffffff :
+ redir_content & 0xffffffff;
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.62.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.62.patch Tue Nov 25 16:37:48 2014 (r22085)
@@ -0,0 +1,4335 @@
+diff --git a/Makefile b/Makefile
+index e5a279c..76c3b6c 100644
+diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
+index 3eaeedf..d77b342 100644
+--- a/arch/ia64/include/asm/processor.h
++++ b/arch/ia64/include/asm/processor.h
+@@ -361,7 +361,7 @@ struct thread_struct {
+ regs->loadrs = 0; \
+ regs->r8 = get_dumpable(current->mm); /* set "don't zap registers" flag */ \
+ regs->r12 = new_sp - 16; /* allocate 16 byte scratch area */ \
+- if (unlikely(!get_dumpable(current->mm))) { \
++ if (unlikely(get_dumpable(current->mm) != SUID_DUMP_USER)) { \
+ /* \
+ * Zap scratch regs to avoid leaking bits between processes with different \
+ * uid/privileges. \
+diff --git a/arch/s390/kernel/head64.S b/arch/s390/kernel/head64.S
+index d984a2a..5b27ed0 100644
+--- a/arch/s390/kernel/head64.S
++++ b/arch/s390/kernel/head64.S
+@@ -124,7 +124,7 @@ startup_continue:
+ .quad 0 # cr12: tracing off
+ .quad 0 # cr13: home space segment table
+ .quad 0xc0000000 # cr14: machine check handling off
+- .quad 0 # cr15: linkage stack operations
++ .quad .Llinkage_stack # cr15: linkage stack operations
+ .Lpcmsk:.quad 0x0000000180000000
+ .L4malign:.quad 0xffffffffffc00000
+ .Lscan2g:.quad 0x80000000 + 0x20000 - 8 # 2GB + 128K - 8
+@@ -139,12 +139,15 @@ startup_continue:
+ .Lparmaddr:
+ .quad PARMAREA
+ .align 64
+-.Lduct: .long 0,0,0,0,.Lduald,0,0,0
++.Lduct: .long 0,.Laste,.Laste,0,.Lduald,0,0,0
+ .long 0,0,0,0,0,0,0,0
++.Laste: .quad 0,0xffffffffffffffff,0,0,0,0,0,0
+ .align 128
+ .Lduald:.rept 8
+ .long 0x80000000,0,0,0 # invalid access-list entries
+ .endr
++.Llinkage_stack:
++ .long 0,0,0x89000000,0,0,0,0x8a000000,0
+
+ .org 0x12000
+ .globl _ehead
+diff --git a/arch/um/kernel/exitcode.c b/arch/um/kernel/exitcode.c
+index 6540d2c..ce057af 100644
+--- a/arch/um/kernel/exitcode.c
++++ b/arch/um/kernel/exitcode.c
+@@ -42,9 +42,11 @@ static int write_proc_exitcode(struct file *file, const char __user *buffer,
+ unsigned long count, void *data)
+ {
+ char *end, buf[sizeof("nnnnn\0")];
++ size_t size;
+ int tmp;
+
+- if (copy_from_user(buf, buffer, count))
++ size = min(count, sizeof(buf));
++ if (copy_from_user(buf, buffer, size))
+ return -EFAULT;
+
+ tmp = simple_strtol(buf, &end, 0);
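The exitcode.c change above clamps the caller-supplied `count` to the stack buffer's size before `copy_from_user()`. A minimal userspace sketch of the same bounds-clamping pattern (names hypothetical, and unlike the kernel fix this sketch also NUL-terminates; it assumes `dst_size > 0`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Clamp the caller-supplied count to the destination buffer size
 * before copying, mirroring size = min(count, sizeof(buf)), so an
 * oversized count can no longer overflow the buffer. Returns the
 * number of usable bytes after termination. */
static size_t bounded_copy(char *dst, size_t dst_size,
                           const char *src, size_t count)
{
    size_t n = count < dst_size ? count : dst_size;

    memcpy(dst, src, n);
    if (n == dst_size)          /* keep room for a terminator */
        n = dst_size - 1;
    dst[n] = '\0';
    return n;
}
```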
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 0b20bbb..cb42fad 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -242,12 +242,13 @@ clear_state:
+ /* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+ is pending. Clear the x87 state here by setting it to fixed
+ values. safe_address is a random variable that should be in L1 */
+- alternative_input(
+- GENERIC_NOP8 GENERIC_NOP2,
+- "emms\n\t" /* clear stack tags */
+- "fildl %[addr]", /* set F?P to defined value */
+- X86_FEATURE_FXSAVE_LEAK,
+- [addr] "m" (safe_address));
++ if (unlikely(boot_cpu_has(X86_FEATURE_FXSAVE_LEAK))) {
++ asm volatile(
++ "fnclex\n\t"
++ "emms\n\t"
++ "fildl %[addr]" /* set F?P to defined value */
++ : : [addr] "m" (safe_address));
++ }
+ end:
+ task_thread_info(tsk)->status &= ~TS_USEDFPU;
+ }
+diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
+index e668d72..1ec926d 100644
+--- a/arch/x86/include/asm/ptrace.h
++++ b/arch/x86/include/asm/ptrace.h
+@@ -2,7 +2,6 @@
+ #define _ASM_X86_PTRACE_H
+
+ #include <linux/compiler.h> /* For __user */
+-#include <linux/linkage.h> /* For asmregparm */
+ #include <asm/ptrace-abi.h>
+ #include <asm/processor-flags.h>
+
+@@ -143,9 +142,6 @@ extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs,
+ int error_code, int si_code);
+ void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
+
+-extern asmregparm long syscall_trace_enter(struct pt_regs *);
+-extern asmregparm void syscall_trace_leave(struct pt_regs *);
+-
+ static inline unsigned long regs_return_value(struct pt_regs *regs)
+ {
+ return regs->ax;
+diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k6.c b/arch/x86/kernel/cpu/cpufreq/powernow-k6.c
+index f10dea4..eb890f1 100644
+--- a/arch/x86/kernel/cpu/cpufreq/powernow-k6.c
++++ b/arch/x86/kernel/cpu/cpufreq/powernow-k6.c
+@@ -26,41 +26,108 @@
+ static unsigned int busfreq; /* FSB, in 10 kHz */
+ static unsigned int max_multiplier;
+
++static unsigned int param_busfreq = 0;
++static unsigned int param_max_multiplier = 0;
++
++module_param_named(max_multiplier, param_max_multiplier, uint, S_IRUGO);
++MODULE_PARM_DESC(max_multiplier, "Maximum multiplier (allowed values: 20 30 35 40 45 50 55 60)");
++
++module_param_named(bus_frequency, param_busfreq, uint, S_IRUGO);
++MODULE_PARM_DESC(bus_frequency, "Bus frequency in kHz");
+
+ /* Clock ratio multiplied by 10 - see table 27 in AMD#23446 */
+ static struct cpufreq_frequency_table clock_ratio[] = {
+- {45, /* 000 -> 4.5x */ 0},
++ {60, /* 110 -> 6.0x */ 0},
++ {55, /* 011 -> 5.5x */ 0},
+ {50, /* 001 -> 5.0x */ 0},
++ {45, /* 000 -> 4.5x */ 0},
+ {40, /* 010 -> 4.0x */ 0},
+- {55, /* 011 -> 5.5x */ 0},
+- {20, /* 100 -> 2.0x */ 0},
+- {30, /* 101 -> 3.0x */ 0},
+- {60, /* 110 -> 6.0x */ 0},
+ {35, /* 111 -> 3.5x */ 0},
++ {30, /* 101 -> 3.0x */ 0},
++ {20, /* 100 -> 2.0x */ 0},
+ {0, CPUFREQ_TABLE_END}
+ };
+
++static const u8 index_to_register[8] = { 6, 3, 1, 0, 2, 7, 5, 4 };
++static const u8 register_to_index[8] = { 3, 2, 4, 1, 7, 6, 0, 5 };
++
++static const struct {
++ unsigned freq;
++ unsigned mult;
++} usual_frequency_table[] = {
++ { 400000, 40 }, // 100 * 4
++ { 450000, 45 }, // 100 * 4.5
++ { 475000, 50 }, // 95 * 5
++ { 500000, 50 }, // 100 * 5
++ { 506250, 45 }, // 112.5 * 4.5
++ { 533500, 55 }, // 97 * 5.5
++ { 550000, 55 }, // 100 * 5.5
++ { 562500, 50 }, // 112.5 * 5
++ { 570000, 60 }, // 95 * 6
++ { 600000, 60 }, // 100 * 6
++ { 618750, 55 }, // 112.5 * 5.5
++ { 660000, 55 }, // 120 * 5.5
++ { 675000, 60 }, // 112.5 * 6
++ { 720000, 60 }, // 120 * 6
++};
++
++#define FREQ_RANGE 3000
+
+ /**
+ * powernow_k6_get_cpu_multiplier - returns the current FSB multiplier
+ *
+- * Returns the current setting of the frequency multiplier. Core clock
++ * Returns the current setting of the frequency multiplier. Core clock
+ * speed is frequency of the Front-Side Bus multiplied with this value.
+ */
+ static int powernow_k6_get_cpu_multiplier(void)
+ {
+- u64 invalue = 0;
++ unsigned long invalue = 0;
+ u32 msrval;
+
++ local_irq_disable();
++
+ msrval = POWERNOW_IOPORT + 0x1;
+ wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */
+ invalue = inl(POWERNOW_IOPORT + 0x8);
+ msrval = POWERNOW_IOPORT + 0x0;
+ wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */
+
+- return clock_ratio[(invalue >> 5)&7].index;
++ local_irq_enable();
++
++ return clock_ratio[register_to_index[(invalue >> 5)&7]].index;
+ }
+
++static void powernow_k6_set_cpu_multiplier(unsigned int best_i)
++{
++ unsigned long outvalue, invalue;
++ unsigned long msrval;
++ unsigned long cr0;
++
++ /* we now need to transform best_i to the BVC format, see AMD#23446 */
++
++ /*
++ * The processor doesn't respond to inquiry cycles while changing the
++ * frequency, so we must disable cache.
++ */
++ local_irq_disable();
++ cr0 = read_cr0();
++ write_cr0(cr0 | X86_CR0_CD);
++ wbinvd();
++
++ outvalue = (1<<12) | (1<<10) | (1<<9) | (index_to_register[best_i]<<5);
++
++ msrval = POWERNOW_IOPORT + 0x1;
++ wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */
++ invalue = inl(POWERNOW_IOPORT + 0x8);
++ invalue = invalue & 0x1f;
++ outvalue = outvalue | invalue;
++ outl(outvalue, (POWERNOW_IOPORT + 0x8));
++ msrval = POWERNOW_IOPORT + 0x0;
++ wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */
++
++ write_cr0(cr0);
++ local_irq_enable();
++}
+
+ /**
+ * powernow_k6_set_state - set the PowerNow! multiplier
+@@ -70,8 +137,6 @@ static int powernow_k6_get_cpu_multiplier(void)
+ */
+ static void powernow_k6_set_state(unsigned int best_i)
+ {
+- unsigned long outvalue = 0, invalue = 0;
+- unsigned long msrval;
+ struct cpufreq_freqs freqs;
+
+ if (clock_ratio[best_i].index > max_multiplier) {
+@@ -85,18 +150,7 @@ static void powernow_k6_set_state(unsigned int best_i)
+
+ cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
+
+- /* we now need to transform best_i to the BVC format, see AMD#23446 */
+-
+- outvalue = (1<<12) | (1<<10) | (1<<9) | (best_i<<5);
+-
+- msrval = POWERNOW_IOPORT + 0x1;
+- wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */
+- invalue = inl(POWERNOW_IOPORT + 0x8);
+- invalue = invalue & 0xf;
+- outvalue = outvalue | invalue;
+- outl(outvalue , (POWERNOW_IOPORT + 0x8));
+- msrval = POWERNOW_IOPORT + 0x0;
+- wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */
++ powernow_k6_set_cpu_multiplier(best_i);
+
+ cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
+
+@@ -141,18 +195,57 @@ static int powernow_k6_target(struct cpufreq_policy *policy,
+ return 0;
+ }
+
+-
+ static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
+ {
+ unsigned int i, f;
+ int result;
++ unsigned khz;
+
+ if (policy->cpu != 0)
+ return -ENODEV;
+
+- /* get frequencies */
+- max_multiplier = powernow_k6_get_cpu_multiplier();
+- busfreq = cpu_khz / max_multiplier;
++ max_multiplier = 0;
++ khz = cpu_khz;
++ for (i = 0; i < ARRAY_SIZE(usual_frequency_table); i++) {
++ if (khz >= usual_frequency_table[i].freq - FREQ_RANGE &&
++ khz <= usual_frequency_table[i].freq + FREQ_RANGE) {
++ khz = usual_frequency_table[i].freq;
++ max_multiplier = usual_frequency_table[i].mult;
++ break;
++ }
++ }
++ if (param_max_multiplier) {
++ for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
++ if (clock_ratio[i].index == param_max_multiplier) {
++ max_multiplier = param_max_multiplier;
++ goto have_max_multiplier;
++ }
++ }
++ printk(KERN_ERR "powernow-k6: invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n");
++ return -EINVAL;
++ }
++
++ if (!max_multiplier) {
++ printk(KERN_WARNING "powernow-k6: unknown frequency %u, cannot determine current multiplier\n", khz);
++ printk(KERN_WARNING "powernow-k6: use module parameters max_multiplier and bus_frequency\n");
++ return -EOPNOTSUPP;
++ }
++
++have_max_multiplier:
++ param_max_multiplier = max_multiplier;
++
++ if (param_busfreq) {
++ if (param_busfreq >= 50000 && param_busfreq <= 150000) {
++ busfreq = param_busfreq / 10;
++ goto have_busfreq;
++ }
++ printk(KERN_ERR "powernow-k6: invalid bus_frequency parameter, allowed range 50000 - 150000 kHz\n");
++ return -EINVAL;
++ }
++
++ busfreq = khz / max_multiplier;
++have_busfreq:
++ param_busfreq = busfreq * 10;
+
+ /* table init */
+ for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
+@@ -164,7 +257,7 @@ static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
+ }
+
+ /* cpuinfo and default policy values */
+- policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
++ policy->cpuinfo.transition_latency = 500000;
+ policy->cur = busfreq * max_multiplier;
+
+ result = cpufreq_frequency_table_cpuinfo(policy, clock_ratio);
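The powernow-k6 rework above stops deriving the bus frequency by naive division and instead matches the measured `cpu_khz` against a table of known K6 frequencies within a +/- `FREQ_RANGE` tolerance (the measured value jitters around the nominal one). A standalone sketch of that lookup, with the table values taken from the patch:

```c
#include <assert.h>
#include <stddef.h>

#define FREQ_RANGE 3000

/* Known K6 frequency/multiplier pairs (multiplier is x10),
 * as in usual_frequency_table in the patch. */
static const struct { unsigned freq; unsigned mult; } freq_table[] = {
    { 400000, 40 }, { 450000, 45 }, { 475000, 50 }, { 500000, 50 },
    { 506250, 45 }, { 533500, 55 }, { 550000, 55 }, { 562500, 50 },
    { 570000, 60 }, { 600000, 60 }, { 618750, 55 }, { 660000, 55 },
    { 675000, 60 }, { 720000, 60 },
};

/* Return the multiplier for a measured khz value, 0 if no nominal
 * frequency lies within the tolerance window. */
static unsigned lookup_multiplier(unsigned khz)
{
    size_t i;

    for (i = 0; i < sizeof(freq_table) / sizeof(freq_table[0]); i++) {
        if (khz >= freq_table[i].freq - FREQ_RANGE &&
            khz <= freq_table[i].freq + FREQ_RANGE)
            return freq_table[i].mult;
    }
    return 0;
}
```

When the lookup fails, the patched driver falls back to the new `max_multiplier`/`bus_frequency` module parameters rather than guessing.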
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 8dfeaaa..b77857f 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -519,7 +519,8 @@ static u32 apic_get_tmcct(struct kvm_lapic *apic)
+ ASSERT(apic != NULL);
+
+ /* if initial count is 0, current count should also be 0 */
+- if (apic_get_reg(apic, APIC_TMICT) == 0)
++ if (apic_get_reg(apic, APIC_TMICT) == 0 ||
++ apic->lapic_timer.period == 0)
+ return 0;
+
+ remaining = hrtimer_get_remaining(&apic->lapic_timer.timer);
+diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c
+index 3aa6e38..0ffd5995 100644
+--- a/crypto/ansi_cprng.c
++++ b/crypto/ansi_cprng.c
+@@ -232,11 +232,11 @@ remainder:
+ */
+ if (byte_count < DEFAULT_BLK_SZ) {
+ empty_rbuf:
+- for (; ctx->rand_data_valid < DEFAULT_BLK_SZ;
+- ctx->rand_data_valid++) {
++ while (ctx->rand_data_valid < DEFAULT_BLK_SZ) {
+ *ptr = ctx->rand_data[ctx->rand_data_valid];
+ ptr++;
+ byte_count--;
++ ctx->rand_data_valid++;
+ if (byte_count == 0)
+ goto done;
+ }
+diff --git a/crypto/api.c b/crypto/api.c
+index 798526d..f4be65f 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -40,6 +40,8 @@ static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg)
+ return alg;
+ }
+
++static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg);
++
+ struct crypto_alg *crypto_mod_get(struct crypto_alg *alg)
+ {
+ return try_module_get(alg->cra_module) ? crypto_alg_get(alg) : NULL;
+@@ -150,8 +152,11 @@ static struct crypto_alg *crypto_larval_add(const char *name, u32 type,
+ }
+ up_write(&crypto_alg_sem);
+
+- if (alg != &larval->alg)
++ if (alg != &larval->alg) {
+ kfree(larval);
++ if (crypto_is_larval(alg))
++ alg = crypto_larval_wait(alg);
++ }
+
+ return alg;
+ }
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index e33ae00..adbaed5 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -3557,6 +3557,7 @@ init_card(struct atm_dev *dev)
+ if (tmp) {
+ memcpy(card->atmdev->esi, tmp->dev_addr, 6);
+
++ dev_put(tmp);
+ printk("%s: ESI %02x:%02x:%02x:%02x:%02x:%02x\n",
+ card->name, card->atmdev->esi[0], card->atmdev->esi[1],
+ card->atmdev->esi[2], card->atmdev->esi[3],
+diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
+index 68b90d9..b2225ab 100644
+--- a/drivers/block/cciss.c
++++ b/drivers/block/cciss.c
+@@ -1051,6 +1051,7 @@ static int cciss_ioctl32_big_passthru(struct block_device *bdev, fmode_t mode,
+ int err;
+ u32 cp;
+
++ memset(&arg64, 0, sizeof(arg64));
+ err = 0;
+ err |=
+ copy_from_user(&arg64.LUN_info, &arg32->LUN_info,
+diff --git a/drivers/block/cpqarray.c b/drivers/block/cpqarray.c
+index 6422651..f9caa45 100644
+--- a/drivers/block/cpqarray.c
++++ b/drivers/block/cpqarray.c
+@@ -1181,6 +1181,7 @@ out_passthru:
+ ida_pci_info_struct pciinfo;
+
+ if (!arg) return -EINVAL;
++ memset(&pciinfo, 0, sizeof(pciinfo));
+ pciinfo.bus = host->pci_dev->bus->number;
+ pciinfo.dev_fn = host->pci_dev->devfn;
+ pciinfo.board_id = host->board_id;
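The cciss and cpqarray hunks above add `memset()` before structures are filled in and copied to userspace. Without it, compiler-inserted padding bytes carry stale stack contents to an unprivileged reader. A sketch of the pattern (this struct layout is illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A struct with likely padding after the first member: on common
 * ABIs, 3 padding bytes sit between bus and dev_fn. */
struct pci_info_example {
    unsigned char bus;
    unsigned int  dev_fn;
    unsigned int  board_id;
};

/* Zero the whole object first, so padding and any member left unset
 * cannot leak whatever was previously on the stack. */
static void fill_pci_info(struct pci_info_example *out)
{
    memset(out, 0, sizeof(*out));
    out->bus = 1;
    out->dev_fn = 0x28;
    out->board_id = 0x40700e11;
}
```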
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 5c01f74..f959aad 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -3162,7 +3162,12 @@ static inline int raw_cmd_copyout(int cmd, char __user *param,
+ int ret;
+
+ while (ptr) {
+- COPYOUT(*ptr);
++ struct floppy_raw_cmd cmd = *ptr;
++ cmd.next = NULL;
++ cmd.kernel_data = NULL;
++ ret = copy_to_user((void __user *)param, &cmd, sizeof(cmd));
++ if (ret)
++ return -EFAULT;
+ param += sizeof(struct floppy_raw_cmd);
+ if ((ptr->flags & FD_RAW_READ) && ptr->buffer_length) {
+ if (ptr->length >= 0
+@@ -3209,9 +3214,12 @@ static inline int raw_cmd_copyin(int cmd, char __user *param,
+ if (!ptr)
+ return -ENOMEM;
+ *rcmd = ptr;
+- COPYIN(*ptr);
++ ret = copy_from_user(ptr, (void __user *)param, sizeof(*ptr));
+ ptr->next = NULL;
+ ptr->buffer_length = 0;
++ ptr->kernel_data = NULL;
++ if (ret)
++ return -EFAULT;
+ param += sizeof(struct floppy_raw_cmd);
+ if (ptr->cmd_count > 33)
+ /* the command may now also take up the space
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 26ada47..90550ba 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -655,7 +655,9 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *lo,
+
+ mutex_unlock(&lo->tx_lock);
+
+- thread = kthread_create(nbd_thread, lo, lo->disk->disk_name);
++ thread = kthread_create(nbd_thread, lo, "%s",
++ lo->disk->disk_name);
++
+ if (IS_ERR(thread)) {
+ mutex_lock(&lo->tx_lock);
+ return PTR_ERR(thread);
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index a4592ec..71a78dc 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -2822,7 +2822,7 @@ static noinline int mmc_ioctl_cdrom_read_data(struct cdrom_device_info *cdi,
+ if (lba < 0)
+ return -EINVAL;
+
+- cgc->buffer = kmalloc(blocksize, GFP_KERNEL);
++ cgc->buffer = kzalloc(blocksize, GFP_KERNEL);
+ if (cgc->buffer == NULL)
+ return -ENOMEM;
+
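The cdrom.c hunk swaps `kmalloc()` for `kzalloc()` so that if the drive returns less data than requested, the unfilled tail of the buffer reads back as zeros instead of stale heap contents. The userspace analogue is `calloc()` over `malloc()`:

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate a transfer buffer zeroed (like kzalloc), so any region a
 * short device read fails to fill cannot leak old heap data when the
 * buffer is later copied out wholesale. */
static unsigned char *alloc_transfer_buffer(size_t blocksize)
{
    return calloc(1, blocksize);
}
```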
+diff --git a/drivers/char/n_tty.c b/drivers/char/n_tty.c
+index 2e50f4d..5269fa0 100644
+--- a/drivers/char/n_tty.c
++++ b/drivers/char/n_tty.c
+@@ -1969,7 +1969,9 @@ static ssize_t n_tty_write(struct tty_struct *tty, struct file *file,
+ tty->ops->flush_chars(tty);
+ } else {
+ while (nr > 0) {
++ mutex_lock(&tty->output_lock);
+ c = tty->ops->write(tty, b, nr);
++ mutex_unlock(&tty->output_lock);
+ if (c < 0) {
+ retval = c;
+ goto break_out;
+diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
+index 6069790..3603599 100644
+--- a/drivers/connector/cn_proc.c
++++ b/drivers/connector/cn_proc.c
+@@ -59,6 +59,7 @@ void proc_fork_connector(struct task_struct *task)
+
+ msg = (struct cn_msg*)buffer;
+ ev = (struct proc_event*)msg->data;
++ memset(&ev->event_data, 0, sizeof(ev->event_data));
+ get_seq(&msg->seq, &ev->cpu);
+ ktime_get_ts(&ts); /* get high res monotonic timestamp */
+ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
+@@ -71,6 +72,7 @@ void proc_fork_connector(struct task_struct *task)
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+ msg->len = sizeof(*ev);
++ msg->flags = 0; /* not used */
+ /* If cn_netlink_send() failed, the data is not sent */
+ cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
+ }
+@@ -87,6 +89,7 @@ void proc_exec_connector(struct task_struct *task)
+
+ msg = (struct cn_msg*)buffer;
+ ev = (struct proc_event*)msg->data;
++ memset(&ev->event_data, 0, sizeof(ev->event_data));
+ get_seq(&msg->seq, &ev->cpu);
+ ktime_get_ts(&ts); /* get high res monotonic timestamp */
+ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
+@@ -97,6 +100,7 @@ void proc_exec_connector(struct task_struct *task)
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+ msg->len = sizeof(*ev);
++ msg->flags = 0; /* not used */
+ cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
+ }
+
+@@ -113,6 +117,7 @@ void proc_id_connector(struct task_struct *task, int which_id)
+
+ msg = (struct cn_msg*)buffer;
+ ev = (struct proc_event*)msg->data;
++ memset(&ev->event_data, 0, sizeof(ev->event_data));
+ ev->what = which_id;
+ ev->event_data.id.process_pid = task->pid;
+ ev->event_data.id.process_tgid = task->tgid;
+@@ -136,6 +141,7 @@ void proc_id_connector(struct task_struct *task, int which_id)
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+ msg->len = sizeof(*ev);
++ msg->flags = 0; /* not used */
+ cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
+ }
+
+@@ -151,6 +157,7 @@ void proc_sid_connector(struct task_struct *task)
+
+ msg = (struct cn_msg *)buffer;
+ ev = (struct proc_event *)msg->data;
++ memset(&ev->event_data, 0, sizeof(ev->event_data));
+ get_seq(&msg->seq, &ev->cpu);
+ ktime_get_ts(&ts); /* get high res monotonic timestamp */
+ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
+@@ -161,6 +168,7 @@ void proc_sid_connector(struct task_struct *task)
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+ msg->len = sizeof(*ev);
++ msg->flags = 0; /* not used */
+ cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
+ }
+
+@@ -176,8 +184,10 @@ void proc_exit_connector(struct task_struct *task)
+
+ msg = (struct cn_msg*)buffer;
+ ev = (struct proc_event*)msg->data;
++ memset(&ev->event_data, 0, sizeof(ev->event_data));
+ get_seq(&msg->seq, &ev->cpu);
+ ktime_get_ts(&ts); /* get high res monotonic timestamp */
++ memset(&ev->event_data, 0, sizeof(ev->event_data));
+ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
+ ev->what = PROC_EVENT_EXIT;
+ ev->event_data.exit.process_pid = task->pid;
+@@ -188,6 +198,7 @@ void proc_exit_connector(struct task_struct *task)
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+ msg->len = sizeof(*ev);
++ msg->flags = 0; /* not used */
+ cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
+ }
+
+@@ -211,6 +222,7 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack)
+
+ msg = (struct cn_msg*)buffer;
+ ev = (struct proc_event*)msg->data;
++ memset(&ev->event_data, 0, sizeof(ev->event_data));
+ msg->seq = rcvd_seq;
+ ktime_get_ts(&ts); /* get high res monotonic timestamp */
+ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
+@@ -220,6 +232,7 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack)
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = rcvd_ack + 1;
+ msg->len = sizeof(*ev);
++ msg->flags = 0; /* not used */
+ cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
+ }
+
+diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
+index 537c29a..980412b 100644
+--- a/drivers/connector/connector.c
++++ b/drivers/connector/connector.c
+@@ -177,17 +177,18 @@ static int cn_call_callback(struct sk_buff *skb)
+ static void cn_rx_skb(struct sk_buff *__skb)
+ {
+ struct nlmsghdr *nlh;
+- int err;
+ struct sk_buff *skb;
++ int len, err;
+
+ skb = skb_get(__skb);
+
+ if (skb->len >= NLMSG_SPACE(0)) {
+ nlh = nlmsg_hdr(skb);
++ len = nlmsg_len(nlh);
+
+- if (nlh->nlmsg_len < sizeof(struct cn_msg) ||
++ if (len < (int)sizeof(struct cn_msg) ||
+ skb->len < nlh->nlmsg_len ||
+- nlh->nlmsg_len > CONNECTOR_MAX_MSG_SIZE) {
++ len > CONNECTOR_MAX_MSG_SIZE) {
+ kfree_skb(skb);
+ return;
+ }
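The connector.c hunk reads `nlmsg_len()` into a signed `int` and compares it as signed, so a tiny claimed total length cannot wrap around when the header size is subtracted in unsigned arithmetic. A sketch of that validation shape for a header-framed message (the constants here are illustrative, not the netlink values):

```c
#include <assert.h>
#include <stddef.h>

#define HDR_LEN     16
#define MIN_MSG_LEN 20
#define MAX_MSG_LEN 16384

/* Validate an untrusted self-describing length: compute the payload
 * length as a signed int (so claimed_total < HDR_LEN goes negative
 * instead of wrapping huge), then check minimum size, that the frame
 * actually contains what it claims, and the maximum size. */
static int msg_len_valid(unsigned int claimed_total, size_t frame_len)
{
    int len = (int)claimed_total - HDR_LEN;

    if (len < MIN_MSG_LEN)
        return 0;
    if (frame_len < claimed_total)
        return 0;
    if (len > MAX_MSG_LEN)
        return 0;
    return 1;
}
```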
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 11f8069..e7e28b5 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -58,6 +58,8 @@ static struct hid_report *hid_register_report(struct hid_device *device, unsigne
+ struct hid_report_enum *report_enum = device->report_enum + type;
+ struct hid_report *report;
+
++ if (id >= HID_MAX_IDS)
++ return NULL;
+ if (report_enum->report_id_hash[id])
+ return report_enum->report_id_hash[id];
+
+@@ -368,8 +370,10 @@ static int hid_parser_global(struct hid_parser *parser, struct hid_item *item)
+
+ case HID_GLOBAL_ITEM_TAG_REPORT_ID:
+ parser->global.report_id = item_udata(item);
+- if (parser->global.report_id == 0) {
+- dbg_hid("report_id 0 is invalid\n");
++ if (parser->global.report_id == 0 ||
++ parser->global.report_id >= HID_MAX_IDS) {
++ dbg_hid("report_id %u is invalid\n",
++ parser->global.report_id);
+ return -1;
+ }
+ return 0;
+@@ -545,7 +549,7 @@ static void hid_device_release(struct device *dev)
+ for (i = 0; i < HID_REPORT_TYPES; i++) {
+ struct hid_report_enum *report_enum = device->report_enum + i;
+
+- for (j = 0; j < 256; j++) {
++ for (j = 0; j < HID_MAX_IDS; j++) {
+ struct hid_report *report = report_enum->report_id_hash[j];
+ if (report)
+ hid_free_report(report);
+@@ -804,6 +808,64 @@ static __inline__ int search(__s32 *array, __s32 value, unsigned n)
+ return -1;
+ }
+
++static const char * const hid_report_names[] = {
++ "HID_INPUT_REPORT",
++ "HID_OUTPUT_REPORT",
++ "HID_FEATURE_REPORT",
++};
++/**
++ * hid_validate_values - validate existing device report's value indexes
++ *
++ * @device: hid device
++ * @type: which report type to examine
++ * @id: which report ID to examine (0 for first)
++ * @field_index: which report field to examine
++ * @report_counts: expected number of values
++ *
++ * Validate the number of values in a given field of a given report, after
++ * parsing.
++ */
++struct hid_report *hid_validate_values(struct hid_device *hid,
++ unsigned int type, unsigned int id,
++ unsigned int field_index,
++ unsigned int report_counts)
++{
++ struct hid_report *report;
++
++ if (type > HID_FEATURE_REPORT) {
++ dev_err(&hid->dev, "invalid HID report type %u\n", type);
++ return NULL;
++ }
++
++ if (id >= HID_MAX_IDS) {
++ dev_err(&hid->dev, "invalid HID report id %u\n", id);
++ return NULL;
++ }
++
++ /*
++ * Explicitly not using hid_get_report() here since it depends on
++ * ->numbered being checked, which may not always be the case when
++ * drivers go to access report values.
++ */
++ report = hid->report_enum[type].report_id_hash[id];
++ if (!report) {
++ dev_err(&hid->dev, "missing %s %u\n", hid_report_names[type], id);
++ return NULL;
++ }
++ if (report->maxfield <= field_index) {
++ dev_err(&hid->dev, "not enough fields in %s %u\n",
++ hid_report_names[type], id);
++ return NULL;
++ }
++ if (report->field[field_index]->report_count < report_counts) {
++ dev_err(&hid->dev, "not enough values in %s %u field %u\n",
++ hid_report_names[type], id, field_index);
++ return NULL;
++ }
++ return report;
++}
++EXPORT_SYMBOL_GPL(hid_validate_values);
++
+ /**
+ * hid_match_report - check if driver's raw_event should be called
+ *
+@@ -979,7 +1041,12 @@ EXPORT_SYMBOL_GPL(hid_output_report);
+
+ int hid_set_field(struct hid_field *field, unsigned offset, __s32 value)
+ {
+- unsigned size = field->report_size;
++ unsigned size;
++
++ if (!field)
++ return -1;
++
++ size = field->report_size;
+
+ hid_dump_input(field->report->device, field->usage + offset, value);
+
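The new `hid_validate_values()` above centralizes every bounds check (report type, report id, field count, value count) in one lookup helper that returns NULL on any violation, so force-feedback drivers can no longer index past a report's real size. A simplified standalone sketch of that shape (sizes and types are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_IDS    256
#define MAX_FIELDS 4

struct report {
    int maxfield;                   /* fields actually parsed */
    int report_count[MAX_FIELDS];   /* values per field */
};

/* Return the report only if the id exists, the field index is in
 * range, and the field holds at least min_values values; NULL on any
 * violation, so callers cannot dereference past the parsed layout. */
static struct report *validate_report(struct report *table[MAX_IDS],
                                      unsigned int id,
                                      unsigned int field_index,
                                      unsigned int min_values)
{
    struct report *r;

    if (id >= MAX_IDS)
        return NULL;
    r = table[id];
    if (!r || field_index >= (unsigned int)r->maxfield)
        return NULL;
    if ((unsigned int)r->report_count[field_index] < min_values)
        return NULL;
    return r;
}
```

The lg2ff/lgff/zpff hunks that follow then collapse their ad-hoc checks into single calls to this helper.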
+diff --git a/drivers/hid/hid-lg2ff.c b/drivers/hid/hid-lg2ff.c
+index 4e6dc6e..a260a8c 100644
+--- a/drivers/hid/hid-lg2ff.c
++++ b/drivers/hid/hid-lg2ff.c
+@@ -65,26 +65,13 @@ int lg2ff_init(struct hid_device *hid)
+ struct hid_report *report;
+ struct hid_input *hidinput = list_entry(hid->inputs.next,
+ struct hid_input, list);
+- struct list_head *report_list =
+- &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+ struct input_dev *dev = hidinput->input;
+ int error;
+
+- if (list_empty(report_list)) {
+- dev_err(&hid->dev, "no output report found\n");
++ /* Check that the report looks ok */
++ report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7);
++ if (!report)
+ return -ENODEV;
+- }
+-
+- report = list_entry(report_list->next, struct hid_report, list);
+-
+- if (report->maxfield < 1) {
+- dev_err(&hid->dev, "output report is empty\n");
+- return -ENODEV;
+- }
+- if (report->field[0]->report_count < 7) {
+- dev_err(&hid->dev, "not enough values in the field\n");
+- return -ENODEV;
+- }
+
+ lg2ff = kmalloc(sizeof(struct lg2ff_device), GFP_KERNEL);
+ if (!lg2ff)
+diff --git a/drivers/hid/hid-lgff.c b/drivers/hid/hid-lgff.c
+index 987abeb..df26abb 100644
+--- a/drivers/hid/hid-lgff.c
++++ b/drivers/hid/hid-lgff.c
+@@ -135,27 +135,14 @@ static void hid_lgff_set_autocenter(struct input_dev *dev, u16 magnitude)
+ int lgff_init(struct hid_device* hid)
+ {
+ struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
+- struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+ struct input_dev *dev = hidinput->input;
+- struct hid_report *report;
+- struct hid_field *field;
+ const signed short *ff_bits = ff_joystick;
+ int error;
+ int i;
+
+- /* Find the report to use */
+- if (list_empty(report_list)) {
+- err_hid("No output report found");
+- return -1;
+- }
+-
+ /* Check that the report looks ok */
+- report = list_entry(report_list->next, struct hid_report, list);
+- field = report->field[0];
+- if (!field) {
+- err_hid("NULL field");
+- return -1;
+- }
++ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7))
++ return -ENODEV;
+
+ for (i = 0; i < ARRAY_SIZE(devices); i++) {
+ if (dev->id.vendor == devices[i].idVendor &&
+diff --git a/drivers/hid/hid-pl.c b/drivers/hid/hid-pl.c
+index c6d7dbc..8cdf7b8 100644
+--- a/drivers/hid/hid-pl.c
++++ b/drivers/hid/hid-pl.c
+@@ -128,8 +128,14 @@ static int plff_init(struct hid_device *hid)
+ strong = &report->field[0]->value[2];
+ weak = &report->field[0]->value[3];
+ debug("detected single-field device");
+- } else if (report->maxfield >= 4 && report->field[0]->maxusage == 1 &&
+- report->field[0]->usage[0].hid == (HID_UP_LED | 0x43)) {
++ } else if (report->field[0]->maxusage == 1 &&
++ report->field[0]->usage[0].hid ==
++ (HID_UP_LED | 0x43) &&
++ report->maxfield >= 4 &&
++ report->field[0]->report_count >= 1 &&
++ report->field[1]->report_count >= 1 &&
++ report->field[2]->report_count >= 1 &&
++ report->field[3]->report_count >= 1) {
+ report->field[0]->value[0] = 0x00;
+ report->field[1]->value[0] = 0x00;
+ strong = &report->field[2]->value[0];
+diff --git a/drivers/hid/hid-zpff.c b/drivers/hid/hid-zpff.c
+index a79f0d7..5617ea9 100644
+--- a/drivers/hid/hid-zpff.c
++++ b/drivers/hid/hid-zpff.c
+@@ -68,21 +68,13 @@ static int zpff_init(struct hid_device *hid)
+ struct hid_report *report;
+ struct hid_input *hidinput = list_entry(hid->inputs.next,
+ struct hid_input, list);
+- struct list_head *report_list =
+- &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+ struct input_dev *dev = hidinput->input;
+- int error;
++ int i, error;
+
+- if (list_empty(report_list)) {
+- dev_err(&hid->dev, "no output report found\n");
+- return -ENODEV;
+- }
+-
+- report = list_entry(report_list->next, struct hid_report, list);
+-
+- if (report->maxfield < 4) {
+- dev_err(&hid->dev, "not enough fields in report\n");
+- return -ENODEV;
++ for (i = 0; i < 4; i++) {
++ report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, i, 1);
++ if (!report)
++ return -ENODEV;
+ }
+
+ zpff = kzalloc(sizeof(struct zpff_device), GFP_KERNEL);
+diff --git a/drivers/isdn/isdnloop/isdnloop.c b/drivers/isdn/isdnloop/isdnloop.c
+index 22446f7..4267d48 100644
+--- a/drivers/isdn/isdnloop/isdnloop.c
++++ b/drivers/isdn/isdnloop/isdnloop.c
+@@ -517,9 +517,9 @@ static isdnloop_stat isdnloop_cmd_table[] =
+ static void
+ isdnloop_fake_err(isdnloop_card * card)
+ {
+- char buf[60];
++ char buf[64];
+
+- sprintf(buf, "E%s", card->omsg);
++ snprintf(buf, sizeof(buf), "E%s", card->omsg);
+ isdnloop_fake(card, buf, -1);
+ isdnloop_fake(card, "NAK", -1);
+ }
+@@ -902,6 +902,8 @@ isdnloop_parse_cmd(isdnloop_card * card)
+ case 7:
+ /* 0x;EAZ */
+ p += 3;
++ if (strlen(p) >= sizeof(card->eazlist[0]))
++ break;
+ strcpy(card->eazlist[ch - 1], p);
+ break;
+ case 8:
+@@ -1069,6 +1071,12 @@ isdnloop_start(isdnloop_card * card, isdnloop_sdef * sdefp)
+ return -EBUSY;
+ if (copy_from_user((char *) &sdef, (char *) sdefp, sizeof(sdef)))
+ return -EFAULT;
++
++ for (i = 0; i < 3; i++) {
++ if (!memchr(sdef.num[i], 0, sizeof(sdef.num[i])))
++ return -EINVAL;
++ }
++
+ spin_lock_irqsave(&card->isdnloop_lock, flags);
+ switch (sdef.ptype) {
+ case ISDN_PTYPE_EURO:
+@@ -1082,8 +1090,10 @@ isdnloop_start(isdnloop_card * card, isdnloop_sdef * sdefp)
+ spin_unlock_irqrestore(&card->isdnloop_lock, flags);
+ return -ENOMEM;
+ }
+- for (i = 0; i < 3; i++)
+- strcpy(card->s0num[i], sdef.num[i]);
++ for (i = 0; i < 3; i++) {
++ strlcpy(card->s0num[i], sdef.num[i],
++ sizeof(card->s0num[0]));
++ }
+ break;
+ case ISDN_PTYPE_1TR6:
+ if (isdnloop_fake(card, "DRV1.04TC-1TR6-CAPI-CNS-BASIS-29.11.95",
+@@ -1096,7 +1106,7 @@ isdnloop_start(isdnloop_card * card, isdnloop_sdef * sdefp)
+ spin_unlock_irqrestore(&card->isdnloop_lock, flags);
+ return -ENOMEM;
+ }
+- strcpy(card->s0num[0], sdef.num[0]);
++ strlcpy(card->s0num[0], sdef.num[0], sizeof(card->s0num[0]));
+ card->s0num[1][0] = '\0';
+ card->s0num[2][0] = '\0';
+ break;
+@@ -1124,7 +1134,7 @@ isdnloop_command(isdn_ctrl * c, isdnloop_card * card)
+ {
+ ulong a;
+ int i;
+- char cbuf[60];
++ char cbuf[80];
+ isdn_ctrl cmd;
+ isdnloop_cdef cdef;
+
+@@ -1189,7 +1199,6 @@ isdnloop_command(isdn_ctrl * c, isdnloop_card * card)
+ break;
+ if ((c->arg & 255) < ISDNLOOP_BCH) {
+ char *p;
+- char dial[50];
+ char dcode[4];
+
+ a = c->arg;
+@@ -1201,10 +1210,10 @@ isdnloop_command(isdn_ctrl * c, isdnloop_card * card)
+ } else
+ /* Normal Dial */
+ strcpy(dcode, "CAL");
+- strcpy(dial, p);
+- sprintf(cbuf, "%02d;D%s_R%s,%02d,%02d,%s\n", (int) (a + 1),
+- dcode, dial, c->parm.setup.si1,
+- c->parm.setup.si2, c->parm.setup.eazmsn);
++ snprintf(cbuf, sizeof(cbuf),
++ "%02d;D%s_R%s,%02d,%02d,%s\n", (int) (a + 1),
++ dcode, p, c->parm.setup.si1,
++ c->parm.setup.si2, c->parm.setup.eazmsn);
+ i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
+ }
+ break;
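The isdnloop hunks replace `sprintf`/`strcpy` into fixed buffers with `snprintf`/`strlcpy`. Since glibc historically lacked `strlcpy`, a local helper with the same contract is common; a minimal sketch (this is the BSD-style semantics, not the kernel's exact implementation):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* strlcpy-style copy: write at most size-1 bytes, always
 * NUL-terminate (when size > 0), and return the full source length
 * so the caller can detect truncation with ret >= size. */
static size_t bounded_strcpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);

    if (size) {
        size_t n = len < size - 1 ? len : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}
```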
+diff --git a/drivers/isdn/mISDN/socket.c b/drivers/isdn/mISDN/socket.c
+index feb0fa4..db69cb4 100644
+--- a/drivers/isdn/mISDN/socket.c
++++ b/drivers/isdn/mISDN/socket.c
+@@ -115,7 +115,6 @@ mISDN_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ {
+ struct sk_buff *skb;
+ struct sock *sk = sock->sk;
+- struct sockaddr_mISDN *maddr;
+
+ int copied, err;
+
+@@ -133,9 +132,9 @@ mISDN_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (!skb)
+ return err;
+
+- if (msg->msg_namelen >= sizeof(struct sockaddr_mISDN)) {
+- msg->msg_namelen = sizeof(struct sockaddr_mISDN);
+- maddr = (struct sockaddr_mISDN *)msg->msg_name;
++ if (msg->msg_name) {
++ struct sockaddr_mISDN *maddr = msg->msg_name;
++
+ maddr->family = AF_ISDN;
+ maddr->dev = _pms(sk)->dev->id;
+ if ((sk->sk_protocol == ISDN_P_LAPD_TE) ||
+@@ -148,11 +147,7 @@ mISDN_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ maddr->sapi = _pms(sk)->ch.addr & 0xFF;
+ maddr->tei = (_pms(sk)->ch.addr >> 8) & 0xFF;
+ }
+- } else {
+- if (msg->msg_namelen)
+- printk(KERN_WARNING "%s: too small namelen %d\n",
+- __func__, msg->msg_namelen);
+- msg->msg_namelen = 0;
++ msg->msg_namelen = sizeof(*maddr);
+ }
+
+ copied = skb->len + MISDN_HEADER_LEN;
+diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
+index 0c74642..97c3f06 100644
+--- a/drivers/md/dm-snap-persistent.c
++++ b/drivers/md/dm-snap-persistent.c
+@@ -252,6 +252,14 @@ static chunk_t area_location(struct pstore *ps, chunk_t area)
+ return 1 + ((ps->exceptions_per_area + 1) * area);
+ }
+
++static void skip_metadata(struct pstore *ps)
++{
++ uint32_t stride = ps->exceptions_per_area + 1;
++ chunk_t next_free = ps->next_free;
++ if (sector_div(next_free, stride) == 1)
++ ps->next_free++;
++}
++
+ /*
+ * Read or write a metadata area. Remembering to skip the first
+ * chunk which holds the header.
+@@ -481,6 +489,8 @@ static int read_exceptions(struct pstore *ps,
+
+ ps->current_area--;
+
++ skip_metadata(ps);
++
+ return 0;
+ }
+
+@@ -587,8 +597,6 @@ static int persistent_prepare_exception(struct dm_exception_store *store,
+ struct dm_snap_exception *e)
+ {
+ struct pstore *ps = get_info(store);
+- uint32_t stride;
+- chunk_t next_free;
+ sector_t size = get_dev_size(store->cow->bdev);
+
+ /* Is there enough room ? */
+@@ -601,10 +609,8 @@ static int persistent_prepare_exception(struct dm_exception_store *store,
+ * Move onto the next free pending, making sure to take
+ * into account the location of the metadata chunks.
+ */
+- stride = (ps->exceptions_per_area + 1);
+- next_free = ++ps->next_free;
+- if (sector_div(next_free, stride) == 1)
+- ps->next_free++;
++ ps->next_free++;
++ skip_metadata(ps);
+
+ atomic_inc(&ps->pending_count);
+ return 0;
+diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c
+index 75a5725..e29940d 100644
+--- a/drivers/net/arcnet/arcnet.c
++++ b/drivers/net/arcnet/arcnet.c
+@@ -1008,7 +1008,7 @@ static void arcnet_rx(struct net_device *dev, int bufnum)
+
+ soft = &pkt.soft.rfc1201;
+
+- lp->hw.copy_from_card(dev, bufnum, 0, &pkt, sizeof(ARC_HDR_SIZE));
++ lp->hw.copy_from_card(dev, bufnum, 0, &pkt, ARC_HDR_SIZE);
+ if (pkt.hard.offset[0]) {
+ ofs = pkt.hard.offset[0];
+ length = 256 - ofs;
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index 05308e6..ec2bf8c 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -1846,8 +1846,6 @@ void bond_3ad_initiate_agg_selection(struct bonding *bond, int timeout)
+ BOND_AD_INFO(bond).agg_select_mode = bond->params.ad_select;
+ }
+
+-static u16 aggregator_identifier;
+-
+ /**
+ * bond_3ad_initialize - initialize a bond's 802.3ad parameters and structures
+ * @bond: bonding struct to work on
+@@ -1862,7 +1860,7 @@ void bond_3ad_initialize(struct bonding *bond, u16 tick_resolution, int lacp_fas
+ if (MAC_ADDRESS_COMPARE(&(BOND_AD_INFO(bond).system.sys_mac_addr),
+ bond->dev->dev_addr)) {
+
+- aggregator_identifier = 0;
++ BOND_AD_INFO(bond).aggregator_identifier = 0;
+
+ BOND_AD_INFO(bond).lacp_fast = lacp_fast;
+ BOND_AD_INFO(bond).system.sys_priority = 0xFFFF;
+@@ -1937,7 +1935,7 @@ int bond_3ad_bind_slave(struct slave *slave)
+ ad_initialize_agg(aggregator);
+
+ aggregator->aggregator_mac_address = *((struct mac_addr *)bond->dev->dev_addr);
+- aggregator->aggregator_identifier = (++aggregator_identifier);
++ aggregator->aggregator_identifier = ++BOND_AD_INFO(bond).aggregator_identifier;
+ aggregator->slave = slave;
+ aggregator->is_active = 0;
+ aggregator->num_of_ports = 0;
+diff --git a/drivers/net/bonding/bond_3ad.h b/drivers/net/bonding/bond_3ad.h
+index 2c46a154..f04f465 100644
+--- a/drivers/net/bonding/bond_3ad.h
++++ b/drivers/net/bonding/bond_3ad.h
+@@ -253,6 +253,7 @@ struct ad_system {
+ struct ad_bond_info {
+ struct ad_system system; /* 802.3ad system structure */
+ u32 agg_select_timer; // Timer to select aggregator after all adapter's hand shakes
++ u16 aggregator_identifier;
+ u32 agg_select_mode; // Mode of selection of active aggregator(bandwidth/count)
+ int lacp_fast; /* whether fast periodic tx should be
+ * requested
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 6ffbfb7..4f52101 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1794,6 +1794,7 @@ int bond_release(struct net_device *bond_dev, struct net_device *slave_dev)
+ struct bonding *bond = netdev_priv(bond_dev);
+ struct slave *slave, *oldcurrent;
+ struct sockaddr addr;
++ int old_flags = bond_dev->flags;
+
+ /* slave is not a slave or master is not master of this slave */
+ if (!(slave_dev->flags & IFF_SLAVE) ||
+@@ -1929,12 +1930,18 @@ int bond_release(struct net_device *bond_dev, struct net_device *slave_dev)
+ * already taken care of above when we detached the slave
+ */
+ if (!USES_PRIMARY(bond->params.mode)) {
+- /* unset promiscuity level from slave */
+- if (bond_dev->flags & IFF_PROMISC)
++ /* unset promiscuity level from slave
++ * NOTE: The NETDEV_CHANGEADDR call above may change the value
++ * of the IFF_PROMISC flag in the bond_dev, but we need the
++ * value of that flag before that change, as that was the value
++ * when this slave was attached, so we cache at the start of the
++ * function and use it here. Same goes for ALLMULTI below
++ */
++ if (old_flags & IFF_PROMISC)
+ dev_set_promiscuity(slave_dev, -1);
+
+ /* unset allmulti level from slave */
+- if (bond_dev->flags & IFF_ALLMULTI)
++ if (old_flags & IFF_ALLMULTI)
+ dev_set_allmulti(slave_dev, -1);
+
+ /* flush master's mc_list from slave */
+diff --git a/drivers/net/bonding/bond_sysfs.c b/drivers/net/bonding/bond_sysfs.c
+index 8762a27..3666a9a 100644
+--- a/drivers/net/bonding/bond_sysfs.c
++++ b/drivers/net/bonding/bond_sysfs.c
+@@ -755,6 +755,8 @@ static ssize_t bonding_store_downdelay(struct device *d,
+ int new_value, ret = count;
+ struct bonding *bond = to_bond(d);
+
++ if (!rtnl_trylock())
++ return restart_syscall();
+ if (!(bond->params.miimon)) {
+ pr_err(DRV_NAME
+ ": %s: Unable to set down delay as MII monitoring is disabled\n",
+@@ -795,6 +797,7 @@ static ssize_t bonding_store_downdelay(struct device *d,
+ }
+
+ out:
++ rtnl_unlock();
+ return ret;
+ }
+ static DEVICE_ATTR(downdelay, S_IRUGO | S_IWUSR,
+@@ -817,6 +820,8 @@ static ssize_t bonding_store_updelay(struct device *d,
+ int new_value, ret = count;
+ struct bonding *bond = to_bond(d);
+
++ if (!rtnl_trylock())
++ return restart_syscall();
+ if (!(bond->params.miimon)) {
+ pr_err(DRV_NAME
+ ": %s: Unable to set up delay as MII monitoring is disabled\n",
+@@ -856,6 +861,7 @@ static ssize_t bonding_store_updelay(struct device *d,
+ }
+
+ out:
++ rtnl_unlock();
+ return ret;
+ }
+ static DEVICE_ATTR(updelay, S_IRUGO | S_IWUSR,
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+index 2868fe8..ea2749f9 100644
+--- a/drivers/net/can/dev.c
++++ b/drivers/net/can/dev.c
+@@ -595,12 +595,12 @@ static size_t can_get_size(const struct net_device *dev)
+ size_t size;
+
+ size = nla_total_size(sizeof(u32)); /* IFLA_CAN_STATE */
+- size += sizeof(struct can_ctrlmode); /* IFLA_CAN_CTRLMODE */
++ size += nla_total_size(sizeof(struct can_ctrlmode)); /* IFLA_CAN_CTRLMODE */
+ size += nla_total_size(sizeof(u32)); /* IFLA_CAN_RESTART_MS */
+- size += sizeof(struct can_bittiming); /* IFLA_CAN_BITTIMING */
+- size += sizeof(struct can_clock); /* IFLA_CAN_CLOCK */
++ size += nla_total_size(sizeof(struct can_bittiming)); /* IFLA_CAN_BITTIMING */
++ size += nla_total_size(sizeof(struct can_clock)); /* IFLA_CAN_CLOCK */
+ if (priv->bittiming_const) /* IFLA_CAN_BITTIMING_CONST */
+- size += sizeof(struct can_bittiming_const);
++ size += nla_total_size(sizeof(struct can_bittiming_const));
+
+ return size;
+ }
+diff --git a/drivers/net/davinci_emac.c b/drivers/net/davinci_emac.c
+index e347831..eafd1e4 100644
+--- a/drivers/net/davinci_emac.c
++++ b/drivers/net/davinci_emac.c
+@@ -960,7 +960,7 @@ static void emac_dev_mcast_set(struct net_device *ndev)
+ mbp_enable = (mbp_enable | EMAC_MBP_RXMCAST);
+ emac_add_mcast(priv, EMAC_ALL_MULTI_SET, NULL);
+ }
+- if (ndev->mc_count > 0) {
++ else if (ndev->mc_count > 0) {
+ struct dev_mc_list *mc_ptr;
+ mbp_enable = (mbp_enable | EMAC_MBP_RXMCAST);
+ emac_add_mcast(priv, EMAC_ALL_MULTI_CLR, NULL);
+diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c
+index 37dcfdc..9d9de18 100644
+--- a/drivers/net/dummy.c
++++ b/drivers/net/dummy.c
+@@ -137,11 +137,15 @@ static int __init dummy_init_module(void)
+
+ rtnl_lock();
+ err = __rtnl_link_register(&dummy_link_ops);
++ if (err < 0)
++ goto out;
+
+ for (i = 0; i < numdummies && !err; i++)
+ err = dummy_init_one();
+ if (err < 0)
+ __rtnl_link_unregister(&dummy_link_ops);
++
++out:
+ rtnl_unlock();
+
+ return err;
+diff --git a/drivers/net/gianfar.c b/drivers/net/gianfar.c
+index 934a28f..8aa2cf6 100644
+--- a/drivers/net/gianfar.c
++++ b/drivers/net/gianfar.c
+@@ -365,7 +365,7 @@ static int gfar_probe(struct of_device *ofdev,
+ priv->vlgrp = NULL;
+
+ if (priv->device_flags & FSL_GIANFAR_DEV_HAS_VLAN)
+- dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
++ dev->features |= NETIF_F_HW_VLAN_RX;
+
+ if (priv->device_flags & FSL_GIANFAR_DEV_HAS_EXTENDED_HASH) {
+ priv->extended_hash = 1;
+@@ -1451,12 +1451,6 @@ static void gfar_vlan_rx_register(struct net_device *dev,
+ priv->vlgrp = grp;
+
+ if (grp) {
+- /* Enable VLAN tag insertion */
+- tempval = gfar_read(&priv->regs->tctrl);
+- tempval |= TCTRL_VLINS;
+-
+- gfar_write(&priv->regs->tctrl, tempval);
+-
+ /* Enable VLAN tag extraction */
+ tempval = gfar_read(&priv->regs->rctrl);
+ tempval |= (RCTRL_VLEX | RCTRL_PRSDEP_INIT);
+diff --git a/drivers/net/hamradio/hdlcdrv.c b/drivers/net/hamradio/hdlcdrv.c
+index 91c5790..c1b265d 100644
+--- a/drivers/net/hamradio/hdlcdrv.c
++++ b/drivers/net/hamradio/hdlcdrv.c
+@@ -572,6 +572,8 @@ static int hdlcdrv_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ case HDLCDRVCTL_CALIBRATE:
+ if(!capable(CAP_SYS_RAWIO))
+ return -EPERM;
++ if (bi.data.calibrate > INT_MAX / s->par.bitrate)
++ return -EINVAL;
+ s->hdlctx.calibrate = bi.data.calibrate * s->par.bitrate / 16;
+ return 0;
+
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 694132e..1a1002d 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -1060,6 +1060,7 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ break;
+
+ case SIOCYAMGCFG:
++ memset(&yi, 0, sizeof(yi));
+ yi.cfg.mask = 0xffffffff;
+ yi.cfg.iobase = yp->iobase;
+ yi.cfg.irq = yp->irq;
+diff --git a/drivers/net/ifb.c b/drivers/net/ifb.c
+index 030913f..509c6f5 100644
+--- a/drivers/net/ifb.c
++++ b/drivers/net/ifb.c
+@@ -33,6 +33,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+ #include <linux/moduleparam.h>
++#include <linux/sched.h>
+ #include <net/pkt_sched.h>
+ #include <net/net_namespace.h>
+
+@@ -268,11 +269,17 @@ static int __init ifb_init_module(void)
+
+ rtnl_lock();
+ err = __rtnl_link_register(&ifb_link_ops);
++ if (err < 0)
++ goto out;
+
+- for (i = 0; i < numifbs && !err; i++)
++ for (i = 0; i < numifbs && !err; i++) {
+ err = ifb_init_one(i);
++ cond_resched();
++ }
+ if (err)
+ __rtnl_link_unregister(&ifb_link_ops);
++
++out:
+ rtnl_unlock();
+
+ return err;
+diff --git a/drivers/net/ll_temac_main.c b/drivers/net/ll_temac_main.c
+index f2a197f..d2516dd 100644
+--- a/drivers/net/ll_temac_main.c
++++ b/drivers/net/ll_temac_main.c
+@@ -190,6 +190,12 @@ static int temac_dma_bd_init(struct net_device *ndev)
+ lp->rx_bd_p + (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
+ temac_dma_out32(lp, TX_CURDESC_PTR, lp->tx_bd_p);
+
++ /* Init descriptor indexes */
++ lp->tx_bd_ci = 0;
++ lp->tx_bd_next = 0;
++ lp->tx_bd_tail = 0;
++ lp->rx_bd_ci = 0;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/pppoe.c b/drivers/net/pppoe.c
+index 2559991..343fd1e 100644
+--- a/drivers/net/pppoe.c
++++ b/drivers/net/pppoe.c
+@@ -992,8 +992,6 @@ static int pppoe_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (error < 0)
+ goto end;
+
+- m->msg_namelen = 0;
+-
+ if (skb) {
+ total_len = min_t(size_t, total_len, skb->len);
+ error = skb_copy_datagram_iovec(skb, 0, m->msg_iov, total_len);
+diff --git a/drivers/net/pppol2tp.c b/drivers/net/pppol2tp.c
+index 9235901..4cdc1cf 100644
+--- a/drivers/net/pppol2tp.c
++++ b/drivers/net/pppol2tp.c
+@@ -829,8 +829,6 @@ static int pppol2tp_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (sk->sk_state & PPPOX_BOUND)
+ goto end;
+
+- msg->msg_namelen = 0;
+-
+ err = 0;
+ skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
+ flags & MSG_DONTWAIT, &err);
+diff --git a/drivers/net/sunvnet.c b/drivers/net/sunvnet.c
+index bc74db0..b6d0348 100644
+--- a/drivers/net/sunvnet.c
++++ b/drivers/net/sunvnet.c
+@@ -1260,6 +1260,8 @@ static int vnet_port_remove(struct vio_dev *vdev)
+ dev_set_drvdata(&vdev->dev, NULL);
+
+ kfree(port);
++
++ unregister_netdev(vp->dev);
+ }
+ return 0;
+ }
+diff --git a/drivers/net/tg3.c b/drivers/net/tg3.c
+index 89aa69c..17e8abe 100644
+diff --git a/drivers/net/tg3.h b/drivers/net/tg3.h
+index 529f55a..593f8c6 100644
+diff --git a/drivers/net/usb/dm9601.c b/drivers/net/usb/dm9601.c
+index 9a6eede..498681a 100644
+--- a/drivers/net/usb/dm9601.c
++++ b/drivers/net/usb/dm9601.c
+@@ -382,7 +382,7 @@ static void dm9601_set_multicast(struct net_device *net)
+ if (net->flags & IFF_PROMISC) {
+ rx_ctl |= 0x02;
+ } else if (net->flags & IFF_ALLMULTI || net->mc_count > DM_MAX_MCAST) {
+- rx_ctl |= 0x04;
++ rx_ctl |= 0x08;
+ } else if (net->mc_count) {
+ struct dev_mc_list *mc_list = net->mc_list;
+ int i;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index bf6d850..97a56f0 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -904,7 +904,8 @@ static int virtnet_probe(struct virtio_device *vdev)
+ /* If we can receive ANY GSO packets, we must allocate large ones. */
+ if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4)
+ || virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6)
+- || virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN))
++ || virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN)
++ || virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO))
+ vi->big_packets = true;
+
+ if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
+diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c
+index beda387..433bf99 100644
+--- a/drivers/net/wan/farsync.c
++++ b/drivers/net/wan/farsync.c
+@@ -1971,6 +1971,7 @@ fst_get_iface(struct fst_card_info *card, struct fst_port_info *port,
+ }
+
+ i = port->index;
++ memset(&sync, 0, sizeof(sync));
+ sync.clock_rate = FST_RDL(card, portConfig[i].lineSpeed);
+ /* Lucky card and linux use same encoding here */
+ sync.clock_type = FST_RDB(card, portConfig[i].internalClock) ==
+diff --git a/drivers/net/wan/wanxl.c b/drivers/net/wan/wanxl.c
+index daee8a0..b52b378 100644
+--- a/drivers/net/wan/wanxl.c
++++ b/drivers/net/wan/wanxl.c
+@@ -354,6 +354,7 @@ static int wanxl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ ifr->ifr_settings.size = size; /* data size wanted */
+ return -ENOBUFS;
+ }
++ memset(&line, 0, sizeof(line));
+ line.clock_type = get_status(port)->clocking;
+ line.clock_rate = 0;
+ line.loopback = 0;
+diff --git a/drivers/net/wireless/b43/main.c b/drivers/net/wireless/b43/main.c
+index 94dae56..3cf2472 100644
+--- a/drivers/net/wireless/b43/main.c
++++ b/drivers/net/wireless/b43/main.c
+@@ -2257,7 +2257,7 @@ static int b43_request_firmware(struct b43_wldev *dev)
+ for (i = 0; i < B43_NR_FWTYPES; i++) {
+ errmsg = ctx->errors[i];
+ if (strlen(errmsg))
+- b43err(dev->wl, errmsg);
++ b43err(dev->wl, "%s", errmsg);
+ }
+ b43_print_fw_helptext(dev->wl, 1);
+ err = -ENOENT;
+diff --git a/drivers/net/wireless/libertas/debugfs.c b/drivers/net/wireless/libertas/debugfs.c
+index 893a55c..89532a6 100644
+--- a/drivers/net/wireless/libertas/debugfs.c
++++ b/drivers/net/wireless/libertas/debugfs.c
+@@ -925,7 +925,10 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
+ char *p2;
+ struct debug_data *d = (struct debug_data *)f->private_data;
+
+- pdata = kmalloc(cnt, GFP_KERNEL);
++ if (cnt == 0)
++ return 0;
++
++ pdata = kmalloc(cnt + 1, GFP_KERNEL);
+ if (pdata == NULL)
+ return 0;
+
+@@ -934,6 +937,7 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
+ kfree(pdata);
+ return 0;
+ }
++ pdata[cnt] = '\0';
+
+ p0 = pdata;
+ for (i = 0; i < num_of_items; i++) {
+diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
+index 5b680df..c1a7b01 100644
+--- a/drivers/pci/intel-iommu.c
++++ b/drivers/pci/intel-iommu.c
+@@ -1434,6 +1434,10 @@ static void domain_exit(struct dmar_domain *domain)
+ if (!domain)
+ return;
+
++ /* Flush any lazy unmaps that may reference this domain */
++ if (!intel_iommu_strict)
++ flush_unmaps_timeout(0);
++
+ domain_remove_dev_info(domain);
+ /* destroy iovas */
+ put_iova_domain(&domain->iovad);
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index c4a42d9..29afd6c 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -3557,7 +3557,7 @@ int qeth_snmp_command(struct qeth_card *card, char __user *udata)
+ struct qeth_cmd_buffer *iob;
+ struct qeth_ipa_cmd *cmd;
+ struct qeth_snmp_ureq *ureq;
+- int req_len;
++ unsigned int req_len;
+ struct qeth_arp_query_info qinfo = {0, };
+ int rc = 0;
+
+@@ -3573,6 +3573,10 @@ int qeth_snmp_command(struct qeth_card *card, char __user *udata)
+ /* skip 4 bytes (data_len struct member) to get req_len */
+ if (copy_from_user(&req_len, udata + sizeof(int), sizeof(int)))
+ return -EFAULT;
++ if (req_len > (QETH_BUFSIZE - IPA_PDU_HEADER_SIZE -
++ sizeof(struct qeth_ipacmd_hdr) -
++ sizeof(struct qeth_ipacmd_setadpparms_hdr)))
++ return -EINVAL;
+ ureq = kmalloc(req_len+sizeof(struct qeth_snmp_ureq_hdr), GFP_KERNEL);
+ if (!ureq) {
+ QETH_DBF_TEXT(TRACE, 2, "snmpnome");
+diff --git a/drivers/scsi/aacraid/commctrl.c b/drivers/scsi/aacraid/commctrl.c
+index a5b8e7b..c895174 100644
+--- a/drivers/scsi/aacraid/commctrl.c
++++ b/drivers/scsi/aacraid/commctrl.c
+@@ -507,7 +507,8 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ goto cleanup;
+ }
+
+- if (fibsize > (dev->max_fib_size - sizeof(struct aac_fibhdr))) {
++ if ((fibsize < (sizeof(struct user_aac_srb) - sizeof(struct user_sgentry))) ||
++ (fibsize > (dev->max_fib_size - sizeof(struct aac_fibhdr)))) {
+ rcode = -EINVAL;
+ goto cleanup;
+ }
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 9b97c3e..387872c 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -754,6 +754,8 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
+ static int aac_compat_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
+ {
+ struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
++ if (!capable(CAP_SYS_RAWIO))
++ return -EPERM;
+ return aac_compat_do_ioctl(dev, cmd, (unsigned long)arg);
+ }
+
+diff --git a/drivers/staging/comedi/drivers/ni_65xx.c b/drivers/staging/comedi/drivers/ni_65xx.c
+index bbf75eb..bb23291 100644
+--- a/drivers/staging/comedi/drivers/ni_65xx.c
++++ b/drivers/staging/comedi/drivers/ni_65xx.c
+@@ -410,28 +410,25 @@ static int ni_65xx_dio_insn_bits(struct comedi_device *dev,
+ struct comedi_subdevice *s,
+ struct comedi_insn *insn, unsigned int *data)
+ {
+- unsigned base_bitfield_channel;
+- const unsigned max_ports_per_bitfield = 5;
++ int base_bitfield_channel;
+ unsigned read_bits = 0;
+- unsigned j;
++ int last_port_offset = ni_65xx_port_by_channel(s->n_chan - 1);
++ int port_offset;
++
+ if (insn->n != 2)
+ return -EINVAL;
+ base_bitfield_channel = CR_CHAN(insn->chanspec);
+- for (j = 0; j < max_ports_per_bitfield; ++j) {
+- const unsigned port_offset = ni_65xx_port_by_channel(base_bitfield_channel) + j;
+- const unsigned port =
+- sprivate(s)->base_port + port_offset;
+- unsigned base_port_channel;
++ for (port_offset = ni_65xx_port_by_channel(base_bitfield_channel);
++ port_offset <= last_port_offset; port_offset++) {
++ unsigned port = sprivate(s)->base_port + port_offset;
++ int base_port_channel = port_offset * ni_65xx_channels_per_port;
+ unsigned port_mask, port_data, port_read_bits;
+- int bitshift;
+- if (port >= ni_65xx_total_num_ports(board(dev)))
++ int bitshift = base_port_channel - base_bitfield_channel;
++
++ if (bitshift >= 32)
+ break;
+- base_port_channel = port_offset * ni_65xx_channels_per_port;
+ port_mask = data[0];
+ port_data = data[1];
+- bitshift = base_port_channel - base_bitfield_channel;
+- if (bitshift >= 32 || bitshift <= -32)
+- break;
+ if (bitshift > 0) {
+ port_mask >>= bitshift;
+ port_data >>= bitshift;
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index e941367..e3804d3 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -669,16 +669,30 @@ static int uio_mmap_physical(struct vm_area_struct *vma)
+ {
+ struct uio_device *idev = vma->vm_private_data;
+ int mi = uio_find_mem_index(vma);
++ struct uio_mem *mem;
+ if (mi < 0)
+ return -EINVAL;
++ mem = idev->info->mem + mi;
++
++ if (vma->vm_end - vma->vm_start > mem->size)
++ return -EINVAL;
+
+ vma->vm_flags |= VM_IO | VM_RESERVED;
+
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
++ /*
++ * We cannot use the vm_iomap_memory() helper here,
++ * because vma->vm_pgoff is the map index we looked
++ * up above in uio_find_mem_index(), rather than an
++ * actual page offset into the mmap.
++ *
++ * So we just do the physical mmap without a page
++ * offset.
++ */
+ return remap_pfn_range(vma,
+ vma->vm_start,
+- idev->info->mem[mi].addr >> PAGE_SHIFT,
++ mem->addr >> PAGE_SHIFT,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot);
+ }
+diff --git a/drivers/video/au1100fb.c b/drivers/video/au1100fb.c
+index a699aab..745e5b3 100644
+--- a/drivers/video/au1100fb.c
++++ b/drivers/video/au1100fb.c
+@@ -392,39 +392,15 @@ void au1100fb_fb_rotate(struct fb_info *fbi, int angle)
+ int au1100fb_fb_mmap(struct fb_info *fbi, struct vm_area_struct *vma)
+ {
+ struct au1100fb_device *fbdev;
+- unsigned int len;
+- unsigned long start=0, off;
+
+ fbdev = to_au1100fb_device(fbi);
+
+- if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) {
+- return -EINVAL;
+- }
+-
+- start = fbdev->fb_phys & PAGE_MASK;
+- len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len);
+-
+- off = vma->vm_pgoff << PAGE_SHIFT;
+-
+- if ((vma->vm_end - vma->vm_start + off) > len) {
+- return -EINVAL;
+- }
+-
+- off += start;
+- vma->vm_pgoff = off >> PAGE_SHIFT;
+-
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ pgprot_val(vma->vm_page_prot) |= (6 << 9); //CCA=6
+
+ vma->vm_flags |= VM_IO;
+
+- if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
+- vma->vm_end - vma->vm_start,
+- vma->vm_page_prot)) {
+- return -EAGAIN;
+- }
+-
+- return 0;
++ return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len);
+ }
+
+ /* fb_cursor
+diff --git a/drivers/video/au1200fb.c b/drivers/video/au1200fb.c
+index 0d96f1d..5d6e509 100644
+--- a/drivers/video/au1200fb.c
++++ b/drivers/video/au1200fb.c
+@@ -1241,42 +1241,18 @@ static int au1200fb_fb_blank(int blank_mode, struct fb_info *fbi)
+ * method mainly to allow the use of the TLB streaming flag (CCA=6)
+ */
+ static int au1200fb_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+-
+ {
+- unsigned int len;
+- unsigned long start=0, off;
+ struct au1200fb_device *fbdev = (struct au1200fb_device *) info;
+
+ #ifdef CONFIG_PM
+ au1xxx_pm_access(LCD_pm_dev);
+ #endif
+-
+- if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) {
+- return -EINVAL;
+- }
+-
+- start = fbdev->fb_phys & PAGE_MASK;
+- len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len);
+-
+- off = vma->vm_pgoff << PAGE_SHIFT;
+-
+- if ((vma->vm_end - vma->vm_start + off) > len) {
+- return -EINVAL;
+- }
+-
+- off += start;
+- vma->vm_pgoff = off >> PAGE_SHIFT;
+-
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ pgprot_val(vma->vm_page_prot) |= _CACHE_MASK; /* CCA=7 */
+
+ vma->vm_flags |= VM_IO;
+
+- return io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
+- vma->vm_end - vma->vm_start,
+- vma->vm_page_prot);
+-
+- return 0;
++ return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len);
+ }
+
+ static void set_global(u_int cmd, struct au1200_lcd_global_regs_t *pdata)
+diff --git a/fs/exec.c b/fs/exec.c
+index feb2435..c32ae34 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1793,6 +1793,12 @@ void set_dumpable(struct mm_struct *mm, int value)
+ }
+ }
+
++/*
++ * This returns the actual value of the suid_dumpable flag. For things
++ * that are using this for checking for privilege transitions, it must
++ * test against SUID_DUMP_USER rather than treating it as a boolean
++ * value.
++ */
+ int get_dumpable(struct mm_struct *mm)
+ {
+ int ret;
+diff --git a/fs/partitions/check.c b/fs/partitions/check.c
+index 7b685e1..aa90d88 100644
+--- a/fs/partitions/check.c
++++ b/fs/partitions/check.c
+@@ -476,7 +476,7 @@ void register_disk(struct gendisk *disk)
+
+ ddev->parent = disk->driverfs_dev;
+
+- dev_set_name(ddev, disk->disk_name);
++ dev_set_name(ddev, "%s", disk->disk_name);
+
+ /* delay uevents, until we scanned partition table */
+ dev_set_uevent_suppress(ddev, 1);
+diff --git a/fs/xfs/linux-2.6/xfs_ioctl.c b/fs/xfs/linux-2.6/xfs_ioctl.c
+index 942362f..5663351 100644
+--- a/fs/xfs/linux-2.6/xfs_ioctl.c
++++ b/fs/xfs/linux-2.6/xfs_ioctl.c
+@@ -410,7 +410,8 @@ xfs_attrlist_by_handle(
+ return -XFS_ERROR(EPERM);
+ if (copy_from_user(&al_hreq, arg, sizeof(xfs_fsop_attrlist_handlereq_t)))
+ return -XFS_ERROR(EFAULT);
+- if (al_hreq.buflen > XATTR_LIST_MAX)
++ if (al_hreq.buflen < sizeof(struct attrlist) ||
++ al_hreq.buflen > XATTR_LIST_MAX)
+ return -XFS_ERROR(EINVAL);
+
+ /*
+diff --git a/fs/xfs/linux-2.6/xfs_ioctl32.c b/fs/xfs/linux-2.6/xfs_ioctl32.c
+index bad485a..e671047 100644
+--- a/fs/xfs/linux-2.6/xfs_ioctl32.c
++++ b/fs/xfs/linux-2.6/xfs_ioctl32.c
+@@ -361,7 +361,8 @@ xfs_compat_attrlist_by_handle(
+ if (copy_from_user(&al_hreq, arg,
+ sizeof(compat_xfs_fsop_attrlist_handlereq_t)))
+ return -XFS_ERROR(EFAULT);
+- if (al_hreq.buflen > XATTR_LIST_MAX)
++ if (al_hreq.buflen < sizeof(struct attrlist) ||
++ al_hreq.buflen > XATTR_LIST_MAX)
+ return -XFS_ERROR(EINVAL);
+
+ /*
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index 9ffffec..8eab628 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -107,9 +107,6 @@ extern int flush_old_exec(struct linux_binprm * bprm);
+ extern void setup_new_exec(struct linux_binprm * bprm);
+
+ extern int suid_dumpable;
+-#define SUID_DUMP_DISABLE 0 /* No setuid dumping */
+-#define SUID_DUMP_USER 1 /* Dump as user of process */
+-#define SUID_DUMP_ROOT 2 /* Dump as root */
+
+ /* Stack area protections */
+ #define EXSTACK_DEFAULT 0 /* Whatever the arch defaults to */
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 8709365..e5db8e5 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -410,10 +410,12 @@ struct hid_report {
+ struct hid_device *device; /* associated device */
+ };
+
++#define HID_MAX_IDS 256
++
+ struct hid_report_enum {
+ unsigned numbered;
+ struct list_head report_list;
+- struct hid_report *report_id_hash[256];
++ struct hid_report *report_id_hash[HID_MAX_IDS];
+ };
+
+ #define HID_REPORT_TYPES 3
+@@ -691,6 +693,10 @@ int hidinput_find_field(struct hid_device *hid, unsigned int type, unsigned int
+ void hid_output_report(struct hid_report *report, __u8 *data);
+ struct hid_device *hid_allocate_device(void);
+ int hid_parse_report(struct hid_device *hid, __u8 *start, unsigned size);
++struct hid_report *hid_validate_values(struct hid_device *hid,
++ unsigned int type, unsigned int id,
++ unsigned int field_index,
++ unsigned int report_counts);
+ int hid_check_keys_pressed(struct hid_device *hid);
+ int hid_connect(struct hid_device *hid, unsigned int connect_mask);
+ void hid_disconnect(struct hid_device *hid);
+diff --git a/include/linux/icmpv6.h b/include/linux/icmpv6.h
+index c0d8357..2e3d33c 100644
+--- a/include/linux/icmpv6.h
++++ b/include/linux/icmpv6.h
+@@ -123,6 +123,8 @@ static inline struct icmp6hdr *icmp6_hdr(const struct sk_buff *skb)
+ #define ICMPV6_NOT_NEIGHBOUR 2
+ #define ICMPV6_ADDR_UNREACH 3
+ #define ICMPV6_PORT_UNREACH 4
++#define ICMPV6_POLICY_FAIL 5
++#define ICMPV6_REJECT_ROUTE 6
+
+ /*
+ * Codes for Time Exceeded
+diff --git a/include/linux/if_pppox.h b/include/linux/if_pppox.h
+index 90b5fae..1750054 100644
+--- a/include/linux/if_pppox.h
++++ b/include/linux/if_pppox.h
+@@ -108,11 +108,11 @@ struct pppoe_tag {
+
+ struct pppoe_hdr {
+ #if defined(__LITTLE_ENDIAN_BITFIELD)
+- __u8 ver : 4;
+ __u8 type : 4;
++ __u8 ver : 4;
+ #elif defined(__BIG_ENDIAN_BITFIELD)
+- __u8 type : 4;
+ __u8 ver : 4;
++ __u8 type : 4;
+ #else
+ #error "Please fix <asm/byteorder.h>"
+ #endif
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index c662efa..5bf3324 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -248,6 +248,7 @@ struct inet6_skb_parm {
+
+ #define IP6SKB_XFRM_TRANSFORMED 1
+ #define IP6SKB_FORWARDED 2
++#define IP6SKB_FRAGMENTED 16
+ };
+
+ #define IP6CB(skb) ((struct inet6_skb_parm*)((skb)->cb))
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 11e5be6..5ef50c1 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1243,6 +1243,8 @@ int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+ unsigned long pfn);
+ int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+ unsigned long pfn);
++int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
++
+
+ struct page *follow_page(struct vm_area_struct *, unsigned long address,
+ unsigned int foll_flags);
+diff --git a/include/linux/net.h b/include/linux/net.h
+index 529a093..e40cbcc 100644
+--- a/include/linux/net.h
++++ b/include/linux/net.h
+@@ -187,6 +187,14 @@ struct proto_ops {
+ int optname, char __user *optval, int __user *optlen);
+ int (*sendmsg) (struct kiocb *iocb, struct socket *sock,
+ struct msghdr *m, size_t total_len);
++ /* Notes for implementing recvmsg:
++ * ===============================
++ * msg->msg_namelen should get updated by the recvmsg handlers
++ * iff msg_name != NULL. It is by default 0 to prevent
++ * returning uninitialized memory to user space. The recvfrom
++ * handlers can assume that msg.msg_name is either NULL or has
++ * a minimum size of sizeof(struct sockaddr_storage).
++ */
+ int (*recvmsg) (struct kiocb *iocb, struct socket *sock,
+ struct msghdr *m, size_t total_len,
+ int flags);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 73c3b9b..56e1771 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -442,6 +442,10 @@ static inline unsigned long get_mm_hiwater_vm(struct mm_struct *mm)
+ extern void set_dumpable(struct mm_struct *mm, int value);
+ extern int get_dumpable(struct mm_struct *mm);
+
++#define SUID_DUMP_DISABLE 0 /* No setuid dumping */
++#define SUID_DUMP_USER 1 /* Dump as user of process */
++#define SUID_DUMP_ROOT 2 /* Dump as root */
++
+ /* mm flags */
+ /* dumpable bits */
+ #define MMF_DUMPABLE 0 /* core dump is permitted */
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 4e647bb..ae77862 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -641,6 +641,16 @@ static inline int skb_cloned(const struct sk_buff *skb)
+ (atomic_read(&skb_shinfo(skb)->dataref) & SKB_DATAREF_MASK) != 1;
+ }
+
++static inline int skb_unclone(struct sk_buff *skb, gfp_t pri)
++{
++ might_sleep_if(pri & __GFP_WAIT);
++
++ if (skb_cloned(skb))
++ return pskb_expand_head(skb, 0, 0, pri);
++
++ return 0;
++}
++
+ /**
+ * skb_header_cloned - is the header a clone
+ * @skb: buffer to check
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 5a900dd..49f443b 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -286,11 +286,22 @@ static inline int __xfrm_lookup(struct net *net, struct dst_entry **dst_p,
+ {
+ return 0;
+ }
++static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
++{
++ return NULL;
++}
++
+ #else
+ extern int xfrm_lookup(struct net *net, struct dst_entry **dst_p,
+ struct flowi *fl, struct sock *sk, int flags);
+ extern int __xfrm_lookup(struct net *net, struct dst_entry **dst_p,
+ struct flowi *fl, struct sock *sk, int flags);
++
++/* skb attached with this dst needs transformation if dst->xfrm is valid */
++static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
++{
++ return dst->xfrm;
++}
+ #endif
+ #endif
+
+diff --git a/include/net/ip.h b/include/net/ip.h
+index a7d4675..e6860b1 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -391,7 +391,7 @@ extern int compat_ip_getsockopt(struct sock *sk, int level,
+ int optname, char __user *optval, int __user *optlen);
+ extern int ip_ra_control(struct sock *sk, unsigned char on, void (*destructor)(struct sock *));
+
+-extern int ip_recv_error(struct sock *sk, struct msghdr *msg, int len);
++extern int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len);
+ extern void ip_icmp_error(struct sock *sk, struct sk_buff *skb, int err,
+ __be16 port, u32 info, u8 *payload);
+ extern void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 dport,
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 52d86da..cf928c4 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -567,7 +567,8 @@ extern int compat_ipv6_getsockopt(struct sock *sk,
+ extern int ip6_datagram_connect(struct sock *sk,
+ struct sockaddr *addr, int addr_len);
+
+-extern int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len);
++extern int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
++ int *addr_len);
+ extern void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
+ u32 info, u8 *payload);
+ extern void ipv6_local_error(struct sock *sk, int err, struct flowi *fl, u32 info);
+diff --git a/include/net/sctp/command.h b/include/net/sctp/command.h
+index 2c55a7e..0edc14d 100644
+--- a/include/net/sctp/command.h
++++ b/include/net/sctp/command.h
+@@ -108,6 +108,7 @@ typedef enum {
+ SCTP_CMD_UPDATE_INITTAG, /* Update peer inittag */
+ SCTP_CMD_SEND_MSG, /* Send the whole use message */
+ SCTP_CMD_SEND_NEXT_ASCONF, /* Send the next ASCONF after ACK */
++ SCTP_CMD_SET_ASOC, /* Restore association context */
+ SCTP_CMD_LAST
+ } sctp_verb_t;
+
+diff --git a/include/net/udp.h b/include/net/udp.h
+index f98abd2..702bea0 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -134,6 +134,7 @@ extern void udp_err(struct sk_buff *, u32);
+
+ extern int udp_sendmsg(struct kiocb *iocb, struct sock *sk,
+ struct msghdr *msg, size_t len);
++extern int udp_push_pending_frames(struct sock *sk);
+ extern void udp_flush_pending_frames(struct sock *sk);
+
+ extern int udp_rcv(struct sk_buff *skb);
+diff --git a/include/scsi/scsi_netlink.h b/include/scsi/scsi_netlink.h
+index 58ce8fe..5cb20cc 100644
+--- a/include/scsi/scsi_netlink.h
++++ b/include/scsi/scsi_netlink.h
+@@ -23,7 +23,7 @@
+ #define SCSI_NETLINK_H
+
+ #include <linux/netlink.h>
+-
++#include <linux/types.h>
+
+ /*
+ * This file intended to be included by both kernel and user space
+diff --git a/kernel/kmod.c b/kernel/kmod.c
+index 8ecc509..3da09a9 100644
+--- a/kernel/kmod.c
++++ b/kernel/kmod.c
+@@ -560,6 +560,10 @@ int call_usermodehelper_exec(struct subprocess_info *sub_info,
+ BUG_ON(atomic_read(&sub_info->cred->usage) != 1);
+ validate_creds(sub_info->cred);
+
++ if (!sub_info->path) {
++ call_usermodehelper_freeinfo(sub_info);
++ return -EINVAL;
++ }
+ helper_lock();
+ if (sub_info->path[0] == '\0')
+ goto out;
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index d9c8c47..4185220 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -187,7 +187,7 @@ int __ptrace_may_access(struct task_struct *task, unsigned int mode)
+ smp_rmb();
+ if (task->mm)
+ dumpable = get_dumpable(task->mm);
+- if (!dumpable && !capable(CAP_SYS_PTRACE))
++ if (dumpable != SUID_DUMP_USER && !capable(CAP_SYS_PTRACE))
+ return -EPERM;
+
+ return security_ptrace_access_check(task, mode);
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index d75c136..e4d5d8c 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -194,8 +194,12 @@ void local_bh_enable_ip(unsigned long ip)
+ EXPORT_SYMBOL(local_bh_enable_ip);
+
+ /*
+- * We restart softirq processing for at most 2 ms,
+- * and if need_resched() is not set.
++ * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
++ * but break the loop if need_resched() is set or after 2 ms.
++ * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in
++ * certain cases, such as stop_machine(), jiffies may cease to
++ * increment and so we need the MAX_SOFTIRQ_RESTART limit as
++ * well to make sure we eventually return from this method.
+ *
+ * These limits have been established via experimentation.
+ * The two things to balance is latency against fairness -
+@@ -203,6 +207,7 @@ EXPORT_SYMBOL(local_bh_enable_ip);
+ * should not be able to lock up the box.
+ */
+ #define MAX_SOFTIRQ_TIME msecs_to_jiffies(2)
++#define MAX_SOFTIRQ_RESTART 10
+
+ asmlinkage void __do_softirq(void)
+ {
+@@ -210,6 +215,7 @@ asmlinkage void __do_softirq(void)
+ __u32 pending;
+ unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
+ int cpu;
++ int max_restart = MAX_SOFTIRQ_RESTART;
+
+ pending = local_softirq_pending();
+ account_system_vtime(current);
+@@ -254,7 +260,8 @@ restart:
+
+ pending = local_softirq_pending();
+ if (pending) {
+- if (time_before(jiffies, end) && !need_resched())
++ if (time_before(jiffies, end) && !need_resched() &&
++ --max_restart)
+ goto restart;
+
+ wakeup_softirqd();
+diff --git a/lib/nlattr.c b/lib/nlattr.c
+index 109d4fe..51b84de 100644
+--- a/lib/nlattr.c
++++ b/lib/nlattr.c
+@@ -299,9 +299,15 @@ int nla_memcmp(const struct nlattr *nla, const void *data,
+ */
+ int nla_strcmp(const struct nlattr *nla, const char *str)
+ {
+- int len = strlen(str) + 1;
+- int d = nla_len(nla) - len;
++ int len = strlen(str);
++ char *buf = nla_data(nla);
++ int attrlen = nla_len(nla);
++ int d;
+
++ if (attrlen > 0 && buf[attrlen - 1] == '\0')
++ attrlen--;
++
++ d = attrlen - len;
+ if (d == 0)
+ d = memcmp(nla_data(nla), str, len);
+
+diff --git a/lib/random32.c b/lib/random32.c
+index 217d5c4..b9275d2 100644
+--- a/lib/random32.c
++++ b/lib/random32.c
+@@ -96,7 +96,7 @@ void srandom32(u32 entropy)
+ */
+ for_each_possible_cpu (i) {
+ struct rnd_state *state = &per_cpu(net_rand_state, i);
+- state->s1 = __seed(state->s1 ^ entropy, 1);
++ state->s1 = __seed(state->s1 ^ entropy, 2);
+ }
+ }
+ EXPORT_SYMBOL(srandom32);
+@@ -113,9 +113,9 @@ static int __init random32_init(void)
+ struct rnd_state *state = &per_cpu(net_rand_state,i);
+
+ #define LCG(x) ((x) * 69069) /* super-duper LCG */
+- state->s1 = __seed(LCG(i + jiffies), 1);
+- state->s2 = __seed(LCG(state->s1), 7);
+- state->s3 = __seed(LCG(state->s2), 15);
++ state->s1 = __seed(LCG(i + jiffies), 2);
++ state->s2 = __seed(LCG(state->s1), 8);
++ state->s3 = __seed(LCG(state->s2), 16);
+
+ /* "warm it up" */
+ __random32(state);
+@@ -142,9 +142,9 @@ static int __init random32_reseed(void)
+ u32 seeds[3];
+
+ get_random_bytes(&seeds, sizeof(seeds));
+- state->s1 = __seed(seeds[0], 1);
+- state->s2 = __seed(seeds[1], 7);
+- state->s3 = __seed(seeds[2], 15);
++ state->s1 = __seed(seeds[0], 2);
++ state->s2 = __seed(seeds[1], 8);
++ state->s3 = __seed(seeds[2], 16);
+
+ /* mix it in */
+ __random32(state);
+diff --git a/mm/memory.c b/mm/memory.c
+index 6c836d3..085b068 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1811,6 +1811,53 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
+ }
+ EXPORT_SYMBOL(remap_pfn_range);
+
++/**
++ * vm_iomap_memory - remap memory to userspace
++ * @vma: user vma to map to
++ * @start: start of area
++ * @len: size of area
++ *
++ * This is a simplified io_remap_pfn_range() for common driver use. The
++ * driver just needs to give us the physical memory range to be mapped,
++ * we'll figure out the rest from the vma information.
++ *
++ * NOTE! Some drivers might want to tweak vma->vm_page_prot first to get
++ * whatever write-combining details or similar.
++ */
++int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
++{
++ unsigned long vm_len, pfn, pages;
++
++ /* Check that the physical memory area passed in looks valid */
++ if (start + len < start)
++ return -EINVAL;
++ /*
++ * You *really* shouldn't map things that aren't page-aligned,
++ * but we've historically allowed it because IO memory might
++ * just have smaller alignment.
++ */
++ len += start & ~PAGE_MASK;
++ pfn = start >> PAGE_SHIFT;
++ pages = (len + ~PAGE_MASK) >> PAGE_SHIFT;
++ if (pfn + pages < pfn)
++ return -EINVAL;
++
++ /* We start the mapping 'vm_pgoff' pages into the area */
++ if (vma->vm_pgoff > pages)
++ return -EINVAL;
++ pfn += vma->vm_pgoff;
++ pages -= vma->vm_pgoff;
++
++ /* Can we fit all of the mapping? */
++ vm_len = vma->vm_end - vma->vm_start;
++ if (vm_len >> PAGE_SHIFT > pages)
++ return -EINVAL;
++
++ /* Ok, let it rip */
++ return io_remap_pfn_range(vma, vma->vm_start, pfn, vm_len, vma->vm_page_prot);
++}
++EXPORT_SYMBOL(vm_iomap_memory);
++
+ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ unsigned long addr, unsigned long end,
+ pte_fn_t fn, void *data)
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 4198ec5..9796ea4 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -220,6 +220,8 @@ vlan_dev_get_egress_qos_mask(struct net_device *dev, struct sk_buff *skb)
+ {
+ struct vlan_priority_tci_mapping *mp;
+
++ smp_rmb(); /* coupled with smp_wmb() in vlan_dev_set_egress_priority() */
++
+ mp = vlan_dev_info(dev)->egress_priority_map[(skb->priority & 0xF)];
+ while (mp) {
+ if (mp->priority == skb->priority) {
+@@ -418,6 +420,11 @@ int vlan_dev_set_egress_priority(const struct net_device *dev,
+ np->next = mp;
+ np->priority = skb_prio;
+ np->vlan_qos = vlan_qos;
++ /* Before inserting this element in hash table, make sure all its fields
++ * are committed to memory.
++ * coupled with smp_rmb() in vlan_dev_get_egress_qos_mask()
++ */
++ smp_wmb();
+ vlan->egress_priority_map[skb_prio & 0xF] = np;
+ if (vlan_qos)
+ vlan->nr_egress_mappings++;
+diff --git a/net/8021q/vlan_netlink.c b/net/8021q/vlan_netlink.c
+index a915048..1f13bcf 100644
+--- a/net/8021q/vlan_netlink.c
++++ b/net/8021q/vlan_netlink.c
+@@ -169,7 +169,7 @@ static size_t vlan_get_size(const struct net_device *dev)
+ struct vlan_dev_info *vlan = vlan_dev_info(dev);
+
+ return nla_total_size(2) + /* IFLA_VLAN_ID */
+- sizeof(struct ifla_vlan_flags) + /* IFLA_VLAN_FLAGS */
++ nla_total_size(sizeof(struct ifla_vlan_flags)) + /* IFLA_VLAN_FLAGS */
+ vlan_qos_map_size(vlan->nr_ingress_mappings) +
+ vlan_qos_map_size(vlan->nr_egress_mappings);
+ }
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index b1a4290..5eae360 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -1703,7 +1703,6 @@ static int atalk_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr
+ size_t size, int flags)
+ {
+ struct sock *sk = sock->sk;
+- struct sockaddr_at *sat = (struct sockaddr_at *)msg->msg_name;
+ struct ddpehdr *ddp;
+ int copied = 0;
+ int offset = 0;
+@@ -1728,14 +1727,13 @@ static int atalk_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr
+ }
+ err = skb_copy_datagram_iovec(skb, offset, msg->msg_iov, copied);
+
+- if (!err) {
+- if (sat) {
+- sat->sat_family = AF_APPLETALK;
+- sat->sat_port = ddp->deh_sport;
+- sat->sat_addr.s_node = ddp->deh_snode;
+- sat->sat_addr.s_net = ddp->deh_snet;
+- }
+- msg->msg_namelen = sizeof(*sat);
++ if (!err && msg->msg_name) {
++ struct sockaddr_at *sat = msg->msg_name;
++ sat->sat_family = AF_APPLETALK;
++ sat->sat_port = ddp->deh_sport;
++ sat->sat_addr.s_node = ddp->deh_snode;
++ sat->sat_addr.s_net = ddp->deh_snet;
++ msg->msg_namelen = sizeof(*sat);
+ }
+
+ skb_free_datagram(sk, skb); /* Free the datagram. */
+diff --git a/net/atm/common.c b/net/atm/common.c
+index 65737b8..0baf05e 100644
+--- a/net/atm/common.c
++++ b/net/atm/common.c
+@@ -473,8 +473,6 @@ int vcc_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
+ struct sk_buff *skb;
+ int copied, error = -EINVAL;
+
+- msg->msg_namelen = 0;
+-
+ if (sock->state != SS_CONNECTED)
+ return -ENOTCONN;
+ if (flags & ~MSG_DONTWAIT) /* only handle MSG_DONTWAIT */
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 8613bd1..6b9d62b 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -1648,11 +1648,11 @@ static int ax25_recvmsg(struct kiocb *iocb, struct socket *sock,
+
+ skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
+
+- if (msg->msg_namelen != 0) {
+- struct sockaddr_ax25 *sax = (struct sockaddr_ax25 *)msg->msg_name;
++ if (msg->msg_name) {
+ ax25_digi digi;
+ ax25_address src;
+ const unsigned char *mac = skb_mac_header(skb);
++ struct sockaddr_ax25 *sax = msg->msg_name;
+
+ memset(sax, 0, sizeof(struct full_sockaddr_ax25));
+ ax25_addr_parse(mac + 1, skb->data - mac - 1, &src, NULL,
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index d7239dd..143b8a7 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -240,8 +240,6 @@ int bt_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (flags & (MSG_OOB))
+ return -EOPNOTSUPP;
+
+- msg->msg_namelen = 0;
+-
+ if (!(skb = skb_recv_datagram(sk, flags, noblock, &err))) {
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+ return 0;
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 45caaaa..0e0f517 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -370,8 +370,6 @@ static int hci_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (!(skb = skb_recv_datagram(sk, flags, noblock, &err)))
+ return err;
+
+- msg->msg_namelen = 0;
+-
+ copied = skb->len;
+ if (len < copied) {
+ msg->msg_flags |= MSG_TRUNC;
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 1db0132..3fabaad 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -652,15 +652,12 @@ static int rfcomm_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+
+ if (test_and_clear_bit(RFCOMM_DEFER_SETUP, &d->flags)) {
+ rfcomm_dlc_accept(d);
+- msg->msg_namelen = 0;
+ return 0;
+ }
+
+ if (flags & MSG_OOB)
+ return -EOPNOTSUPP;
+
+- msg->msg_namelen = 0;
+-
+ BT_DBG("sk %p size %zu", sk, size);
+
+ lock_sock(sk);
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 4a9f527..c01e65d 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -162,6 +162,8 @@ static void del_br(struct net_bridge *br)
+ del_nbp(p);
+ }
+
++ br_fdb_delete_by_port(br, NULL, 1);
++
+ del_timer_sync(&br->gc_timer);
+
+ br_sysfs_delbr(br->dev);
+diff --git a/net/bridge/br_stp.c b/net/bridge/br_stp.c
+index c7d6bfc..a67e6ce 100644
+--- a/net/bridge/br_stp.c
++++ b/net/bridge/br_stp.c
+@@ -192,7 +192,7 @@ static inline void br_record_config_information(struct net_bridge_port *p,
+ p->designated_age = jiffies + bpdu->message_age;
+
+ mod_timer(&p->message_age_timer, jiffies
+- + (p->br->max_age - bpdu->message_age));
++ + (bpdu->max_age - bpdu->message_age));
+ }
+
+ /* called under bridge lock */
+diff --git a/net/compat.c b/net/compat.c
+index 9559afc..e9672c8 100644
+--- a/net/compat.c
++++ b/net/compat.c
+@@ -69,6 +69,8 @@ int get_compat_msghdr(struct msghdr *kmsg, struct compat_msghdr __user *umsg)
+ __get_user(kmsg->msg_controllen, &umsg->msg_controllen) ||
+ __get_user(kmsg->msg_flags, &umsg->msg_flags))
+ return -EFAULT;
++ if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
++ kmsg->msg_namelen = sizeof(struct sockaddr_storage);
+ kmsg->msg_name = compat_ptr(tmp1);
+ kmsg->msg_iov = compat_ptr(tmp2);
+ kmsg->msg_control = compat_ptr(tmp3);
+@@ -89,7 +91,8 @@ int verify_compat_iovec(struct msghdr *kern_msg, struct iovec *kern_iov,
+ if (err < 0)
+ return err;
+ }
+- kern_msg->msg_name = kern_address;
++ if (kern_msg->msg_name)
++ kern_msg->msg_name = kern_address;
+ } else
+ kern_msg->msg_name = NULL;
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index d775563..d250444 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3388,7 +3388,7 @@ static void dev_change_rx_flags(struct net_device *dev, int flags)
+ {
+ const struct net_device_ops *ops = dev->netdev_ops;
+
+- if ((dev->flags & IFF_UP) && ops->ndo_change_rx_flags)
++ if (ops->ndo_change_rx_flags)
+ ops->ndo_change_rx_flags(dev, flags);
+ }
+
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index 0a113f2..e65fa2f 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -63,7 +63,6 @@ static struct genl_family net_drop_monitor_family = {
+ .hdrsize = 0,
+ .name = "NET_DM",
+ .version = 2,
+- .maxattr = NET_DM_CMD_MAX,
+ };
+
+ static DEFINE_PER_CPU(struct per_cpu_dm_data, dm_cpu_data);
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index bd30938..06bdee7 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -381,7 +381,8 @@ static int fib_nl_delrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
+ if (frh->action && (frh->action != rule->action))
+ continue;
+
+- if (frh->table && (frh_get_table(frh, tb) != rule->table))
++ if (frh_get_table(frh, tb) &&
++ (frh_get_table(frh, tb) != rule->table))
+ continue;
+
+ if (tb[FRA_PRIORITY] &&
+@@ -632,6 +633,13 @@ static int fib_rules_event(struct notifier_block *this, unsigned long event,
+ attach_rules(&ops->rules_list, dev);
+ break;
+
++ case NETDEV_CHANGENAME:
++ list_for_each_entry(ops, &net->rules_ops, list) {
++ detach_rules(&ops->rules_list, dev);
++ attach_rules(&ops->rules_list, dev);
++ }
++ break;
++
+ case NETDEV_UNREGISTER:
+ list_for_each_entry(ops, &net->rules_ops, list)
+ detach_rules(&ops->rules_list, dev);
+diff --git a/net/core/iovec.c b/net/core/iovec.c
+index f911e66..39369e9 100644
+--- a/net/core/iovec.c
++++ b/net/core/iovec.c
+@@ -47,7 +47,8 @@ int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr *address,
+ if (err < 0)
+ return err;
+ }
+- m->msg_name = address;
++ if (m->msg_name)
++ m->msg_name = address;
+ } else {
+ m->msg_name = NULL;
+ }
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index e696250..fc9feaa 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -222,7 +222,7 @@ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev)
+ we must kill timers etc. and move
+ it to safe state.
+ */
+- skb_queue_purge(&n->arp_queue);
++ __skb_queue_purge(&n->arp_queue);
+ n->output = neigh_blackhole;
+ if (n->nud_state & NUD_VALID)
+ n->nud_state = NUD_NOARP;
+@@ -276,7 +276,7 @@ static struct neighbour *neigh_alloc(struct neigh_table *tbl)
+ if (!n)
+ goto out_entries;
+
+- skb_queue_head_init(&n->arp_queue);
++ __skb_queue_head_init(&n->arp_queue);
+ rwlock_init(&n->lock);
+ n->updated = n->used = now;
+ n->nud_state = NUD_NONE;
+@@ -646,7 +646,9 @@ void neigh_destroy(struct neighbour *neigh)
+ kfree(hh);
+ }
+
+- skb_queue_purge(&neigh->arp_queue);
++ write_lock_bh(&neigh->lock);
++ __skb_queue_purge(&neigh->arp_queue);
++ write_unlock_bh(&neigh->lock);
+
+ dev_put(neigh->dev);
+ neigh_parms_put(neigh->parms);
+@@ -789,7 +791,7 @@ static void neigh_invalidate(struct neighbour *neigh)
+ neigh->ops->error_report(neigh, skb);
+ write_lock(&neigh->lock);
+ }
+- skb_queue_purge(&neigh->arp_queue);
++ __skb_queue_purge(&neigh->arp_queue);
+ }
+
+ /* Called when a timer expires for a neighbour entry. */
+@@ -1105,7 +1107,7 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ n1->output(skb);
+ write_lock_bh(&neigh->lock);
+ }
+- skb_queue_purge(&neigh->arp_queue);
++ __skb_queue_purge(&neigh->arp_queue);
+ }
+ out:
+ if (update_isrouter) {
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index 6a993b1..f776b99 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -2495,6 +2495,8 @@ static int process_ipsec(struct pktgen_dev *pkt_dev,
+ if (x) {
+ int ret;
+ __u8 *eth;
++ struct iphdr *iph;
++
+ nhead = x->props.header_len - skb_headroom(skb);
+ if (nhead > 0) {
+ ret = pskb_expand_head(skb, nhead, 0, GFP_ATOMIC);
+@@ -2517,6 +2519,11 @@ static int process_ipsec(struct pktgen_dev *pkt_dev,
+ eth = (__u8 *) skb_push(skb, ETH_HLEN);
+ memcpy(eth, pkt_dev->hh, 12);
+ *(u16 *) &eth[12] = protocol;
++
++ /* Update IPv4 header len as well as checksum value */
++ iph = ip_hdr(skb);
++ iph->tot_len = htons(skb->len - ETH_HLEN);
++ ip_send_check(iph);
+ }
+ }
+ return 1;
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 7db1de0..e2eaf29 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -14,6 +14,9 @@
+ #include <net/ip.h>
+ #include <net/sock.h>
+
++static int zero = 0;
++static int ushort_max = 65535;
++
+ static struct ctl_table net_core_table[] = {
+ #ifdef CONFIG_NET
+ {
+@@ -116,7 +119,9 @@ static struct ctl_table netns_core_table[] = {
+ .data = &init_net.core.sysctl_somaxconn,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec
++ .extra1 = &zero,
++ .extra2 = &ushort_max,
++ .proc_handler = proc_dointvec_minmax
+ },
+ { .ctl_name = 0 }
+ };
+diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
+index 5e6c5a0..30aeb26 100644
+--- a/net/ipv4/datagram.c
++++ b/net/ipv4/datagram.c
+@@ -52,7 +52,7 @@ int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->sport, usin->sin_port, sk, 1);
+ if (err) {
+ if (err == -ENETUNREACH)
+- IP_INC_STATS_BH(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
++ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
+ return err;
+ }
+
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 169da93..c07be7c 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -697,7 +697,7 @@ static void igmp_gq_timer_expire(unsigned long data)
+
+ in_dev->mr_gq_running = 0;
+ igmpv3_send_report(in_dev, NULL);
+- __in_dev_put(in_dev);
++ in_dev_put(in_dev);
+ }
+
+ static void igmp_ifc_timer_expire(unsigned long data)
+@@ -709,7 +709,7 @@ static void igmp_ifc_timer_expire(unsigned long data)
+ in_dev->mr_ifc_count--;
+ igmp_ifc_start_timer(in_dev, IGMP_Unsolicited_Report_Interval);
+ }
+- __in_dev_put(in_dev);
++ in_dev_put(in_dev);
+ }
+
+ static void igmp_ifc_event(struct in_device *in_dev)
+diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
+index dba56d2..65ee65a 100644
+--- a/net/ipv4/inet_diag.c
++++ b/net/ipv4/inet_diag.c
+@@ -814,7 +814,7 @@ next_normal:
+ ++num;
+ }
+
+- if (r->idiag_states & TCPF_TIME_WAIT) {
++ if (r->idiag_states & (TCPF_TIME_WAIT | TCPF_FIN_WAIT2)) {
+ struct inet_timewait_sock *tw;
+
+ inet_twsk_for_each(tw, node,
+@@ -822,6 +822,8 @@ next_normal:
+
+ if (num < s_num)
+ goto next_dying;
++ if (!(r->idiag_states & (1 << tw->tw_substate)))
++ goto next_dying;
+ if (r->id.idiag_sport != tw->tw_sport &&
+ r->id.idiag_sport)
+ goto next_dying;
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index d717267..03fd04a 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -247,7 +247,7 @@ begintw:
+ }
+ if (unlikely(!INET_TW_MATCH(sk, net, hash, acookie,
+ saddr, daddr, ports, dif))) {
+- sock_put(sk);
++ inet_twsk_put(inet_twsk(sk));
+ goto begintw;
+ }
+ goto out;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 7dde039..faa6623 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -320,13 +320,13 @@ int ip_queue_xmit(struct sk_buff *skb, int ipfragok)
+ /* Skip all of this if the packet is already routed,
+ * f.e. by something like SCTP.
+ */
++ rcu_read_lock();
+ rt = skb_rtable(skb);
+ if (rt != NULL)
+ goto packet_routed;
+
+ /* Make sure we can route this packet. */
+ rt = (struct rtable *)__sk_dst_check(sk, 0);
+- rcu_read_lock();
+ inet_opt = rcu_dereference(inet->inet_opt);
+ if (rt == NULL) {
+ __be32 daddr;
+@@ -875,7 +875,7 @@ int ip_append_data(struct sock *sk,
+ skb = skb_peek_tail(&sk->sk_write_queue);
+
+ inet->cork.length += length;
+- if (((length > mtu) || (skb && skb_is_gso(skb))) &&
++ if (((length > mtu) || (skb && skb_has_frags(skb))) &&
+ (sk->sk_protocol == IPPROTO_UDP) &&
+ (rt->u.dst.dev->features & NETIF_F_UFO)) {
+ err = ip_ufo_append_data(sk, getfrag, from, length, hh_len,
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 099e6c3..d5a179b 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -356,7 +356,7 @@ void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 port, u32 inf
+ /*
+ * Handle MSG_ERRQUEUE
+ */
+-int ip_recv_error(struct sock *sk, struct msghdr *msg, int len)
++int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
+ {
+ struct sock_exterr_skb *serr;
+ struct sk_buff *skb, *skb2;
+@@ -393,6 +393,7 @@ int ip_recv_error(struct sock *sk, struct msghdr *msg, int len)
+ serr->addr_offset);
+ sin->sin_port = serr->port;
+ memset(&sin->sin_zero, 0, sizeof(sin->sin_zero));
++ *addr_len = sizeof(*sin);
+ }
+
+ memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));
+diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
+index 860b5c5..49aa1ad 100644
+--- a/net/ipv4/ipip.c
++++ b/net/ipv4/ipip.c
+@@ -408,6 +408,7 @@ static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
+ if (tos&1)
+ tos = old_iph->tos;
+
++ memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+ if (!dst) {
+ /* NBMA tunnel */
+ if ((rt = skb_rtable(skb)) == NULL) {
+@@ -494,7 +495,6 @@ static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
+ skb->transport_header = skb->network_header;
+ skb_push(skb, sizeof(struct iphdr));
+ skb_reset_network_header(skb);
+- memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+ IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED |
+ IPSKB_REROUTED);
+ skb_dst_drop(skb);
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 07ab583..8065efa 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -681,11 +681,8 @@ static int raw_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ if (flags & MSG_OOB)
+ goto out;
+
+- if (addr_len)
+- *addr_len = sizeof(*sin);
+-
+ if (flags & MSG_ERRQUEUE) {
+- err = ip_recv_error(sk, msg, len);
++ err = ip_recv_error(sk, msg, len, addr_len);
+ goto out;
+ }
+
+@@ -711,6 +708,7 @@ static int raw_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
+ sin->sin_port = 0;
+ memset(&sin->sin_zero, 0, sizeof(sin->sin_zero));
++ *addr_len = sizeof(*sin);
+ }
+ if (inet->cmsg_flags)
+ ip_cmsg_recv(msg, skb);
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 2dcf04d..910fa54 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -23,6 +23,8 @@
+
+ static int zero;
+ static int tcp_retr1_max = 255;
++static int tcp_syn_retries_min = 1;
++static int tcp_syn_retries_max = MAX_TCP_SYNCNT;
+ static int ip_local_port_range_min[] = { 1, 1 };
+ static int ip_local_port_range_max[] = { 65535, 65535 };
+
+@@ -237,7 +239,9 @@ static struct ctl_table ipv4_table[] = {
+ .data = &ipv4_config.no_pmtu_disc,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = &tcp_syn_retries_min,
++ .extra2 = &tcp_syn_retries_max
+ },
+ {
+ .ctl_name = NET_IPV4_NONLOCAL_BIND,
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 6232462..fc18410 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2826,7 +2826,11 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp,
+
+ for (i = 0; i < shi->nr_frags; ++i) {
+ const struct skb_frag_struct *f = &shi->frags[i];
+- sg_set_page(&sg, f->page, f->size, f->page_offset);
++ unsigned int offset = f->page_offset;
++ struct page *page = f->page + (offset >> PAGE_SHIFT);
++
++ sg_set_page(&sg, page, f->size,
++ offset_in_page(offset));
+ if (crypto_hash_update(desc, &sg, f->size))
+ return 1;
+ }
+diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
+index 71d5f2f..db41113 100644
+--- a/net/ipv4/tcp_cubic.c
++++ b/net/ipv4/tcp_cubic.c
+@@ -90,6 +90,7 @@ struct bictcp {
+ u32 ack_cnt; /* number of acks */
+ u32 tcp_cwnd; /* estimated tcp cwnd */
+ #define ACK_RATIO_SHIFT 4
++#define ACK_RATIO_LIMIT (32u << ACK_RATIO_SHIFT)
+ u16 delayed_ack; /* estimate the ratio of Packets/ACKs << 4 */
+ u8 sample_cnt; /* number of samples to decide curr_rtt */
+ u8 found; /* the exit point is found? */
+@@ -379,8 +380,12 @@ static void bictcp_acked(struct sock *sk, u32 cnt, s32 rtt_us)
+ u32 delay;
+
+ if (icsk->icsk_ca_state == TCP_CA_Open) {
+- cnt -= ca->delayed_ack >> ACK_RATIO_SHIFT;
+- ca->delayed_ack += cnt;
++ u32 ratio = ca->delayed_ack;
++
++ ratio -= ca->delayed_ack >> ACK_RATIO_SHIFT;
++ ratio += cnt;
++
++ ca->delayed_ack = clamp(ratio, 1U, ACK_RATIO_LIMIT);
+ }
+
+ /* Some calls are for duplicates without timetamps */
+@@ -388,7 +393,7 @@ static void bictcp_acked(struct sock *sk, u32 cnt, s32 rtt_us)
+ return;
+
+ /* Discard delay samples right after fast recovery */
+- if ((s32)(tcp_time_stamp - ca->epoch_start) < HZ)
++ if (ca->epoch_start && (s32)(tcp_time_stamp - ca->epoch_start) < HZ)
+ return;
+
+ delay = usecs_to_jiffies(rtt_us) << 3;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index d746d3b3..e60f0fd 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -174,7 +174,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->sport, usin->sin_port, sk, 1);
+ if (tmp < 0) {
+ if (tmp == -ENETUNREACH)
+- IP_INC_STATS_BH(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
++ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
+ return tmp;
+ }
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 38a23e4..0fc0a73 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -744,6 +744,9 @@ static void tcp_queue_skb(struct sock *sk, struct sk_buff *skb)
+ static void tcp_set_skb_tso_segs(struct sock *sk, struct sk_buff *skb,
+ unsigned int mss_now)
+ {
++ /* Make sure we own this skb before messing gso_size/gso_segs */
++ WARN_ON_ONCE(skb_cloned(skb));
++
+ if (skb->len <= mss_now || !sk_can_gso(sk) ||
+ skb->ip_summed == CHECKSUM_NONE) {
+ /* Avoid the costly divide in the normal
+@@ -824,9 +827,7 @@ int tcp_fragment(struct sock *sk, struct sk_buff *skb, u32 len,
+ if (nsize < 0)
+ nsize = 0;
+
+- if (skb_cloned(skb) &&
+- skb_is_nonlinear(skb) &&
+- pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
++ if (skb_unclone(skb, GFP_ATOMIC))
+ return -ENOMEM;
+
+ /* Get a new skb... force flag on. */
+@@ -948,11 +949,9 @@ int tcp_trim_head(struct sock *sk, struct sk_buff *skb, u32 len)
+ sk_mem_uncharge(sk, len);
+ sock_set_flag(sk, SOCK_QUEUE_SHRUNK);
+
+- /* Any change of skb->len requires recalculation of tso
+- * factor and mss.
+- */
++ /* Any change of skb->len requires recalculation of tso factor. */
+ if (tcp_skb_pcount(skb) > 1)
+- tcp_set_skb_tso_segs(sk, skb, tcp_current_mss(sk));
++ tcp_set_skb_tso_segs(sk, skb, tcp_skb_mss(skb));
+
+ return 0;
+ }
+@@ -1932,6 +1931,8 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
+ int oldpcount = tcp_skb_pcount(skb);
+
+ if (unlikely(oldpcount > 1)) {
++ if (skb_unclone(skb, GFP_ATOMIC))
++ return -ENOMEM;
+ tcp_init_tso_segs(sk, skb, cur_mss);
+ tcp_adjust_pcount(sk, skb, oldpcount - tcp_skb_pcount(skb));
+ }
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index af559e0..0b2e07f 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -513,7 +513,7 @@ static void udp4_hwcsum_outgoing(struct sock *sk, struct sk_buff *skb,
+ /*
+ * Push out all pending data as one UDP datagram. Socket is locked.
+ */
+-static int udp_push_pending_frames(struct sock *sk)
++int udp_push_pending_frames(struct sock *sk)
+ {
+ struct udp_sock *up = udp_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
+@@ -575,6 +575,7 @@ out:
+ up->pending = 0;
+ return err;
+ }
++EXPORT_SYMBOL(udp_push_pending_frames);
+
+ int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ size_t len)
+@@ -723,7 +724,7 @@ int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ err = ip_route_output_flow(net, &rt, &fl, sk, 1);
+ if (err) {
+ if (err == -ENETUNREACH)
+- IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
++ IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
+ goto out;
+ }
+
+@@ -941,14 +942,8 @@ int udp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ int err;
+ int is_udplite = IS_UDPLITE(sk);
+
+- /*
+- * Check any passed addresses
+- */
+- if (addr_len)
+- *addr_len = sizeof(*sin);
+-
+ if (flags & MSG_ERRQUEUE)
+- return ip_recv_error(sk, msg, len);
++ return ip_recv_error(sk, msg, len, addr_len);
+
+ try_again:
+ skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
+@@ -1001,6 +996,7 @@ try_again:
+ sin->sin_port = udp_hdr(skb)->source;
+ sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
++ *addr_len = sizeof(*sin);
+ }
+ if (inet->cmsg_flags)
+ ip_cmsg_recv(msg, skb);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 8ac3d09..e8c4fd9 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -920,12 +920,10 @@ retry:
+ if (ifp->flags & IFA_F_OPTIMISTIC)
+ addr_flags |= IFA_F_OPTIMISTIC;
+
+- ift = !max_addresses ||
+- ipv6_count_addresses(idev) < max_addresses ?
+- ipv6_add_addr(idev, &addr, tmp_plen,
+- ipv6_addr_type(&addr)&IPV6_ADDR_SCOPE_MASK,
+- addr_flags) : NULL;
+- if (!ift || IS_ERR(ift)) {
++ ift = ipv6_add_addr(idev, &addr, tmp_plen,
++ ipv6_addr_type(&addr)&IPV6_ADDR_SCOPE_MASK,
++ addr_flags);
++ if (IS_ERR(ift)) {
+ in6_ifa_put(ifp);
+ in6_dev_put(idev);
+ printk(KERN_INFO
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index e2bdc6d..5da306b 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -281,7 +281,7 @@ void ipv6_local_error(struct sock *sk, int err, struct flowi *fl, u32 info)
+ /*
+ * Handle MSG_ERRQUEUE
+ */
+-int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
++int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
+ {
+ struct ipv6_pinfo *np = inet6_sk(sk);
+ struct sock_exterr_skb *serr;
+@@ -333,6 +333,7 @@ int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
+ htonl(0xffff),
+ *(__be32 *)(nh + serr->addr_offset));
+ }
++ *addr_len = sizeof(*sin);
+ }
+
+ memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));
+@@ -341,6 +342,7 @@ int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
+ if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL) {
+ sin->sin6_family = AF_INET6;
+ sin->sin6_flowinfo = 0;
++ sin->sin6_port = 0;
+ sin->sin6_scope_id = 0;
+ if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6) {
+ ipv6_addr_copy(&sin->sin6_addr, &ipv6_hdr(skb)->saddr);
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index f23ebbe..376a4b6 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -903,6 +903,14 @@ static const struct icmp6_err {
+ .err = ECONNREFUSED,
+ .fatal = 1,
+ },
++ { /* POLICY_FAIL */
++ .err = EACCES,
++ .fatal = 1,
++ },
++ { /* REJECT_ROUTE */
++ .err = EACCES,
++ .fatal = 1,
++ },
+ };
+
+ int icmpv6_err_convert(u8 type, u8 code, int *err)
+@@ -914,7 +922,7 @@ int icmpv6_err_convert(u8 type, u8 code, int *err)
+ switch (type) {
+ case ICMPV6_DEST_UNREACH:
+ fatal = 1;
+- if (code <= ICMPV6_PORT_UNREACH) {
++ if (code < ARRAY_SIZE(tab_unreach)) {
+ *err = tab_unreach[code].err;
+ fatal = tab_unreach[code].fatal;
+ }
+diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
+index cc4797d..59f4063 100644
+--- a/net/ipv6/inet6_connection_sock.c
++++ b/net/ipv6/inet6_connection_sock.c
+@@ -57,7 +57,7 @@ EXPORT_SYMBOL_GPL(inet6_csk_bind_conflict);
+ * request_sock (formerly open request) hash tables.
+ */
+ static u32 inet6_synq_hash(const struct in6_addr *raddr, const __be16 rport,
+- const u32 rnd, const u16 synq_hsize)
++ const u32 rnd, const u32 synq_hsize)
+ {
+ u32 a = (__force u32)raddr->s6_addr32[0];
+ u32 b = (__force u32)raddr->s6_addr32[1];
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 093e9b2..93765577 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -104,7 +104,7 @@ begintw:
+ goto out;
+ }
+ if (!INET6_TW_MATCH(sk, net, hash, saddr, daddr, ports, dif)) {
+- sock_put(sk);
++ inet_twsk_put(inet_twsk(sk));
+ goto begintw;
+ }
+ goto out;
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 0e93ca5..0a36d8d 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -846,14 +846,22 @@ static struct fib6_node * fib6_lookup_1(struct fib6_node *root,
+
+ if (ipv6_prefix_equal(&key->addr, args->addr, key->plen)) {
+ #ifdef CONFIG_IPV6_SUBTREES
+- if (fn->subtree)
+- fn = fib6_lookup_1(fn->subtree, args + 1);
++ if (fn->subtree) {
++ struct fib6_node *sfn;
++ sfn = fib6_lookup_1(fn->subtree,
++ args + 1);
++ if (!sfn)
++ goto backtrack;
++ fn = sfn;
++ }
+ #endif
+- if (!fn || fn->fn_flags & RTN_RTINFO)
++ if (fn->fn_flags & RTN_RTINFO)
+ return fn;
+ }
+ }
+-
++#ifdef CONFIG_IPV6_SUBTREES
++backtrack:
++#endif
+ if (fn->fn_flags & RTN_ROOT)
+ break;
+
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 6ba0fe2..6dff3d7 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -91,8 +91,8 @@ static int ip6_output_finish(struct sk_buff *skb)
+ else if (dst->neighbour)
+ return dst->neighbour->output(skb);
+
+- IP6_INC_STATS_BH(dev_net(dst->dev),
+- ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
++ IP6_INC_STATS(dev_net(dst->dev),
++ ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
+ kfree_skb(skb);
+ return -EINVAL;
+
+@@ -920,11 +920,17 @@ static struct dst_entry *ip6_sk_dst_check(struct sock *sk,
+ struct flowi *fl)
+ {
+ struct ipv6_pinfo *np = inet6_sk(sk);
+- struct rt6_info *rt = (struct rt6_info *)dst;
++ struct rt6_info *rt;
+
+ if (!dst)
+ goto out;
+
++ if (dst->ops->family != AF_INET6) {
++ dst_release(dst);
++ return NULL;
++ }
++
++ rt = (struct rt6_info *)dst;
+ /* Yes, checking route validity in not connected
+ * case is not very simple. Take into account,
+ * that we do not support routing by source, TOS,
+@@ -1080,6 +1086,8 @@ static inline int ip6_ufo_append_data(struct sock *sk,
+ * udp datagram
+ */
+ if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL) {
++ struct frag_hdr fhdr;
++
+ skb = sock_alloc_send_skb(sk,
+ hh_len + fragheaderlen + transhdrlen + 20,
+ (flags & MSG_DONTWAIT), &err);
+@@ -1101,12 +1109,6 @@ static inline int ip6_ufo_append_data(struct sock *sk,
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ skb->csum = 0;
+ sk->sk_sndmsg_off = 0;
+- }
+-
+- err = skb_append_datato_frags(sk,skb, getfrag, from,
+- (length - transhdrlen));
+- if (!err) {
+- struct frag_hdr fhdr;
+
+ /* Specify the length of each IPv6 datagram fragment.
+ * It has to be a multiple of 8.
+@@ -1117,15 +1119,10 @@ static inline int ip6_ufo_append_data(struct sock *sk,
+ ipv6_select_ident(&fhdr, rt);
+ skb_shinfo(skb)->ip6_frag_id = fhdr.identification;
+ __skb_queue_tail(&sk->sk_write_queue, skb);
+-
+- return 0;
+ }
+- /* There is not enough support do UPD LSO,
+- * so follow normal path
+- */
+- kfree_skb(skb);
+
+- return err;
++ return skb_append_datato_frags(sk, skb, getfrag, from,
++ (length - transhdrlen));
+ }
+
+ static inline struct ipv6_opt_hdr *ip6_opt_dup(struct ipv6_opt_hdr *src,
+@@ -1168,7 +1165,7 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to,
+ if (WARN_ON(np->cork.opt))
+ return -EINVAL;
+
+- np->cork.opt = kmalloc(opt->tot_len, sk->sk_allocation);
++ np->cork.opt = kzalloc(opt->tot_len, sk->sk_allocation);
+ if (unlikely(np->cork.opt == NULL))
+ return -ENOBUFS;
+
+@@ -1258,18 +1255,20 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to,
+ */
+
+ inet->cork.length += length;
+- if (((length > mtu) && (sk->sk_protocol == IPPROTO_UDP)) &&
++ skb = skb_peek_tail(&sk->sk_write_queue);
++ if (((length > mtu) ||
++ (skb && skb_has_frags(skb))) &&
++ (sk->sk_protocol == IPPROTO_UDP) &&
+ (rt->u.dst.dev->features & NETIF_F_UFO)) {
+-
+- err = ip6_ufo_append_data(sk, getfrag, from, length, hh_len,
+- fragheaderlen, transhdrlen, mtu,
+- flags, rt);
++ err = ip6_ufo_append_data(sk, getfrag, from, length,
++ hh_len, fragheaderlen,
++ transhdrlen, mtu, flags, rt);
+ if (err)
+ goto error;
+ return 0;
+ }
+
+- if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL)
++ if (!skb)
+ goto alloc_new_skb;
+
+ while (length > 0) {
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index f9fcf69..99ae9e3 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2208,7 +2208,7 @@ static void mld_gq_timer_expire(unsigned long data)
+
+ idev->mc_gq_running = 0;
+ mld_send_report(idev, NULL);
+- __in6_dev_put(idev);
++ in6_dev_put(idev);
+ }
+
+ static void mld_ifc_timer_expire(unsigned long data)
+@@ -2221,7 +2221,7 @@ static void mld_ifc_timer_expire(unsigned long data)
+ if (idev->mc_ifc_count)
+ mld_ifc_start_timer(idev, idev->mc_maxdelay);
+ }
+- __in6_dev_put(idev);
++ in6_dev_put(idev);
+ }
+
+ static void mld_ifc_event(struct inet6_dev *idev)
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index f74e4e2..752da21 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -449,7 +449,6 @@ struct sk_buff *ndisc_build_skb(struct net_device *dev,
+ struct sk_buff *skb;
+ struct icmp6hdr *hdr;
+ int len;
+- int err;
+ u8 *opt;
+
+ if (!dev->addr_len)
+@@ -459,14 +458,12 @@ struct sk_buff *ndisc_build_skb(struct net_device *dev,
+ if (llinfo)
+ len += ndisc_opt_addr_space(dev);
+
+- skb = sock_alloc_send_skb(sk,
+- (MAX_HEADER + sizeof(struct ipv6hdr) +
+- len + LL_ALLOCATED_SPACE(dev)),
+- 1, &err);
++ skb = alloc_skb((MAX_HEADER + sizeof(struct ipv6hdr) +
++ len + LL_ALLOCATED_SPACE(dev)), GFP_ATOMIC);
+ if (!skb) {
+ ND_PRINTK0(KERN_ERR
+- "ICMPv6 ND: %s() failed to allocate an skb, err=%d.\n",
+- __func__, err);
++ "ICMPv6 ND: %s() failed to allocate an skb.\n",
++ __func__);
+ return NULL;
+ }
+
+@@ -494,6 +491,11 @@ struct sk_buff *ndisc_build_skb(struct net_device *dev,
+ csum_partial(hdr,
+ len, 0));
+
++ /* Manually assign socket ownership as we avoid calling
++ * sock_alloc_send_pskb() to bypass wmem buffer limits
++ */
++ skb_set_owner_w(skb, sk);
++
+ return skb;
+ }
+
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 4f24570..d5b09c7 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -456,11 +456,8 @@ static int rawv6_recvmsg(struct kiocb *iocb, struct sock *sk,
+ if (flags & MSG_OOB)
+ return -EOPNOTSUPP;
+
+- if (addr_len)
+- *addr_len=sizeof(*sin6);
+-
+ if (flags & MSG_ERRQUEUE)
+- return ipv6_recv_error(sk, msg, len);
++ return ipv6_recv_error(sk, msg, len, addr_len);
+
+ skb = skb_recv_datagram(sk, flags, noblock, &err);
+ if (!skb)
+@@ -495,6 +492,7 @@ static int rawv6_recvmsg(struct kiocb *iocb, struct sock *sk,
+ sin6->sin6_scope_id = 0;
+ if (ipv6_addr_type(&sin6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
+ sin6->sin6_scope_id = IP6CB(skb)->iif;
++ *addr_len = sizeof(*sin6);
+ }
+
+ sock_recv_timestamp(msg, sk, skb);
+diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
+index 105de22..0c09d8e 100644
+--- a/net/ipv6/reassembly.c
++++ b/net/ipv6/reassembly.c
+@@ -503,6 +503,7 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,
+ head->tstamp = fq->q.stamp;
+ ipv6_hdr(head)->payload_len = htons(payload_len);
+ IP6CB(head)->nhoff = nhoff;
++ IP6CB(head)->flags |= IP6SKB_FRAGMENTED;
+
+ /* Yes, and fold redundant checksum back. 8) */
+ if (head->ip_summed == CHECKSUM_COMPLETE)
+@@ -537,6 +538,9 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
+ struct ipv6hdr *hdr = ipv6_hdr(skb);
+ struct net *net = dev_net(skb_dst(skb)->dev);
+
++ if (IP6CB(skb)->flags & IP6SKB_FRAGMENTED)
++ goto fail_hdr;
++
+ IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMREQDS);
+
+ /* Jumbo payload inhibits frag. header */
+@@ -557,6 +561,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
+ ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMOKS);
+
+ IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
++ IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
+ return 1;
+ }
+
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index e307517..5af0d1e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -495,8 +495,11 @@ int rt6_route_rcv(struct net_device *dev, u8 *opt, int len,
+ prefix = &prefix_buf;
+ }
+
+- rt = rt6_get_route_info(net, prefix, rinfo->prefix_len, gwaddr,
+- dev->ifindex);
++ if (rinfo->prefix_len == 0)
++ rt = rt6_get_dflt_router(gwaddr, dev);
++ else
++ rt = rt6_get_route_info(net, prefix, rinfo->prefix_len,
++ gwaddr, dev->ifindex);
+
+ if (rt && !lifetime) {
+ ip6_del_rt(rt);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index d8c0374..d0367eb 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -200,11 +200,8 @@ int udpv6_recvmsg(struct kiocb *iocb, struct sock *sk,
+ int is_udplite = IS_UDPLITE(sk);
+ int is_udp4;
+
+- if (addr_len)
+- *addr_len=sizeof(struct sockaddr_in6);
+-
+ if (flags & MSG_ERRQUEUE)
+- return ipv6_recv_error(sk, msg, len);
++ return ipv6_recv_error(sk, msg, len, addr_len);
+
+ try_again:
+ skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
+@@ -273,7 +270,7 @@ try_again:
+ if (ipv6_addr_type(&sin6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
+ sin6->sin6_scope_id = IP6CB(skb)->iif;
+ }
+-
++ *addr_len = sizeof(*sin6);
+ }
+ if (is_udp4) {
+ if (inet->cmsg_flags)
+@@ -690,11 +687,16 @@ static int udp_v6_push_pending_frames(struct sock *sk)
+ struct udphdr *uh;
+ struct udp_sock *up = udp_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
+- struct flowi *fl = &inet->cork.fl;
++ struct flowi *fl;
+ int err = 0;
+ int is_udplite = IS_UDPLITE(sk);
+ __wsum csum = 0;
+
++ if (up->pending == AF_INET)
++ return udp_push_pending_frames(sk);
++
++ fl = &inet->cork.fl;
++
+ /* Grab the skbuff where UDP header space exists. */
+ if ((skb = skb_peek(&sk->sk_write_queue)) == NULL)
+ goto out;
+diff --git a/net/ipx/af_ipx.c b/net/ipx/af_ipx.c
+index 66c7a20..25931b3 100644
+--- a/net/ipx/af_ipx.c
++++ b/net/ipx/af_ipx.c
+@@ -1808,8 +1808,6 @@ static int ipx_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (skb->tstamp.tv64)
+ sk->sk_stamp = skb->tstamp;
+
+- msg->msg_namelen = sizeof(*sipx);
+-
+ if (sipx) {
+ sipx->sipx_family = AF_IPX;
+ sipx->sipx_port = ipx->ipx_source.sock;
+@@ -1817,6 +1815,7 @@ static int ipx_recvmsg(struct kiocb *iocb, struct socket *sock,
+ sipx->sipx_network = IPX_SKB_CB(skb)->ipx_source_net;
+ sipx->sipx_type = ipx->ipx_type;
+ sipx->sipx_zero = 0;
++ msg->msg_namelen = sizeof(*sipx);
+ }
+ rc = copied;
+
+diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c
+index bfb325d..7cb7613 100644
+--- a/net/irda/af_irda.c
++++ b/net/irda/af_irda.c
+@@ -1338,8 +1338,6 @@ static int irda_recvmsg_dgram(struct kiocb *iocb, struct socket *sock,
+ if ((err = sock_error(sk)) < 0)
+ return err;
+
+- msg->msg_namelen = 0;
+-
+ skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
+ flags & MSG_DONTWAIT, &err);
+ if (!skb)
+@@ -1402,8 +1400,6 @@ static int irda_recvmsg_stream(struct kiocb *iocb, struct socket *sock,
+ target = sock_rcvlowat(sk, flags & MSG_WAITALL, size);
+ timeo = sock_rcvtimeo(sk, noblock);
+
+- msg->msg_namelen = 0;
+-
+ do {
+ int chunk;
+ struct sk_buff *skb = skb_dequeue(&sk->sk_receive_queue);
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index f605b23..bada1b9 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -1160,8 +1160,6 @@ static int iucv_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
+ struct sk_buff *skb, *rskb, *cskb;
+ int err = 0;
+
+- msg->msg_namelen = 0;
+-
+ if ((sk->sk_state == IUCV_DISCONN || sk->sk_state == IUCV_SEVERED) &&
+ skb_queue_empty(&iucv->backlog_skb_q) &&
+ skb_queue_empty(&sk->sk_receive_queue) &&
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 4e98193..3e5d0dc 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1726,6 +1726,7 @@ static int key_notify_sa_flush(struct km_event *c)
+ hdr->sadb_msg_version = PF_KEY_V2;
+ hdr->sadb_msg_errno = (uint8_t) 0;
+ hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
++ hdr->sadb_msg_reserved = 0;
+
+ pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ALL, NULL, c->net);
+
+@@ -2078,6 +2079,7 @@ static int pfkey_xfrm_policy2msg(struct sk_buff *skb, struct xfrm_policy *xp, in
+ pol->sadb_x_policy_type = IPSEC_POLICY_NONE;
+ }
+ pol->sadb_x_policy_dir = dir+1;
++ pol->sadb_x_policy_reserved = 0;
+ pol->sadb_x_policy_id = xp->index;
+ pol->sadb_x_policy_priority = xp->priority;
+
+@@ -2693,7 +2695,9 @@ static int key_notify_policy_flush(struct km_event *c)
+ hdr->sadb_msg_pid = c->pid;
+ hdr->sadb_msg_version = PF_KEY_V2;
+ hdr->sadb_msg_errno = (uint8_t) 0;
++ hdr->sadb_msg_satype = SADB_SATYPE_UNSPEC;
+ hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
++ hdr->sadb_msg_reserved = 0;
+ pfkey_broadcast(skb_out, GFP_ATOMIC, BROADCAST_ALL, NULL, c->net);
+ return 0;
+
+@@ -3108,7 +3112,9 @@ static int pfkey_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *t, struct
+ pol->sadb_x_policy_exttype = SADB_X_EXT_POLICY;
+ pol->sadb_x_policy_type = IPSEC_POLICY_IPSEC;
+ pol->sadb_x_policy_dir = dir+1;
++ pol->sadb_x_policy_reserved = 0;
+ pol->sadb_x_policy_id = xp->index;
++ pol->sadb_x_policy_priority = xp->priority;
+
+ /* Set sadb_comb's. */
+ if (x->id.proto == IPPROTO_AH)
+@@ -3496,6 +3502,7 @@ static int pfkey_send_migrate(struct xfrm_selector *sel, u8 dir, u8 type,
+ pol->sadb_x_policy_exttype = SADB_X_EXT_POLICY;
+ pol->sadb_x_policy_type = IPSEC_POLICY_IPSEC;
+ pol->sadb_x_policy_dir = dir + 1;
++ pol->sadb_x_policy_reserved = 0;
+ pol->sadb_x_policy_id = 0;
+ pol->sadb_x_policy_priority = 0;
+
+@@ -3590,7 +3597,6 @@ static int pfkey_recvmsg(struct kiocb *kiocb,
+ if (flags & ~(MSG_PEEK|MSG_DONTWAIT|MSG_TRUNC|MSG_CMSG_COMPAT))
+ goto out;
+
+- msg->msg_namelen = 0;
+ skb = skb_recv_datagram(sk, flags, flags & MSG_DONTWAIT, &err);
+ if (skb == NULL)
+ goto out;
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 8a814a5..f62b63e 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -669,13 +669,11 @@ static int llc_ui_recvmsg(struct kiocb *iocb, struct socket *sock,
+ struct llc_sock *llc = llc_sk(sk);
+ size_t copied = 0;
+ u32 peek_seq = 0;
+- u32 *seq;
++ u32 *seq, skb_len;
+ unsigned long used;
+ int target; /* Read at least this many bytes */
+ long timeo;
+
+- msg->msg_namelen = 0;
+-
+ lock_sock(sk);
+ copied = -ENOTCONN;
+ if (unlikely(sk->sk_type == SOCK_STREAM && sk->sk_state == TCP_LISTEN))
+@@ -769,6 +767,7 @@ static int llc_ui_recvmsg(struct kiocb *iocb, struct socket *sock,
+ }
+ continue;
+ found_ok_skb:
++ skb_len = skb->len;
+ /* Ok so how much can we use? */
+ used = skb->len - offset;
+ if (len < used)
+@@ -799,7 +798,7 @@ static int llc_ui_recvmsg(struct kiocb *iocb, struct socket *sock,
+ goto copy_uaddr;
+
+ /* Partial read */
+- if (used + offset < skb->len)
++ if (used + offset < skb_len)
+ continue;
+ } while (len > 0);
+
+diff --git a/net/netfilter/ipvs/ip_vs_proto_tcp.c b/net/netfilter/ipvs/ip_vs_proto_tcp.c
+index 91d28e0..d462b0d 100644
+--- a/net/netfilter/ipvs/ip_vs_proto_tcp.c
++++ b/net/netfilter/ipvs/ip_vs_proto_tcp.c
+@@ -147,15 +147,15 @@ tcp_partial_csum_update(int af, struct tcphdr *tcph,
+ #ifdef CONFIG_IP_VS_IPV6
+ if (af == AF_INET6)
+ tcph->check =
+- csum_fold(ip_vs_check_diff16(oldip->ip6, newip->ip6,
++ ~csum_fold(ip_vs_check_diff16(oldip->ip6, newip->ip6,
+ ip_vs_check_diff2(oldlen, newlen,
+- ~csum_unfold(tcph->check))));
++ csum_unfold(tcph->check))));
+ else
+ #endif
+ tcph->check =
+- csum_fold(ip_vs_check_diff4(oldip->ip, newip->ip,
++ ~csum_fold(ip_vs_check_diff4(oldip->ip, newip->ip,
+ ip_vs_check_diff2(oldlen, newlen,
+- ~csum_unfold(tcph->check))));
++ csum_unfold(tcph->check))));
+ }
+
+
+@@ -269,7 +269,7 @@ tcp_dnat_handler(struct sk_buff *skb,
+ * Adjust TCP checksums
+ */
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+- tcp_partial_csum_update(cp->af, tcph, &cp->daddr, &cp->vaddr,
++ tcp_partial_csum_update(cp->af, tcph, &cp->vaddr, &cp->daddr,
+ htons(oldlen),
+ htons(skb->len - tcphoff));
+ } else if (!cp->app) {
+diff --git a/net/netfilter/ipvs/ip_vs_proto_udp.c b/net/netfilter/ipvs/ip_vs_proto_udp.c
+index e7a6885..c1781f5 100644
+--- a/net/netfilter/ipvs/ip_vs_proto_udp.c
++++ b/net/netfilter/ipvs/ip_vs_proto_udp.c
+@@ -154,15 +154,15 @@ udp_partial_csum_update(int af, struct udphdr *uhdr,
+ #ifdef CONFIG_IP_VS_IPV6
+ if (af == AF_INET6)
+ uhdr->check =
+- csum_fold(ip_vs_check_diff16(oldip->ip6, newip->ip6,
++ ~csum_fold(ip_vs_check_diff16(oldip->ip6, newip->ip6,
+ ip_vs_check_diff2(oldlen, newlen,
+- ~csum_unfold(uhdr->check))));
++ csum_unfold(uhdr->check))));
+ else
+ #endif
+ uhdr->check =
+- csum_fold(ip_vs_check_diff4(oldip->ip, newip->ip,
++ ~csum_fold(ip_vs_check_diff4(oldip->ip, newip->ip,
+ ip_vs_check_diff2(oldlen, newlen,
+- ~csum_unfold(uhdr->check))));
++ csum_unfold(uhdr->check))));
+ }
+
+
+@@ -205,7 +205,7 @@ udp_snat_handler(struct sk_buff *skb,
+ * Adjust UDP checksums
+ */
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+- udp_partial_csum_update(cp->af, udph, &cp->daddr, &cp->vaddr,
++ udp_partial_csum_update(cp->af, udph, &cp->vaddr, &cp->daddr,
+ htons(oldlen),
+ htons(skb->len - udphoff));
+ } else if (!cp->app && (udph->check != 0)) {
+diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
+index 1b816a2..274e8a7 100644
+--- a/net/netfilter/nf_conntrack_proto_dccp.c
++++ b/net/netfilter/nf_conntrack_proto_dccp.c
+@@ -430,7 +430,7 @@ static bool dccp_new(struct nf_conn *ct, const struct sk_buff *skb,
+ const char *msg;
+ u_int8_t state;
+
+- dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &dh);
++ dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
+ BUG_ON(dh == NULL);
+
+ state = dccp_state_table[CT_DCCP_ROLE_CLIENT][dh->dccph_type][CT_DCCP_NONE];
+@@ -479,7 +479,7 @@ static int dccp_packet(struct nf_conn *ct, const struct sk_buff *skb,
+ u_int8_t type, old_state, new_state;
+ enum ct_dccp_roles role;
+
+- dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &dh);
++ dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
+ BUG_ON(dh == NULL);
+ type = dh->dccph_type;
+
+@@ -570,7 +570,7 @@ static int dccp_error(struct net *net, struct sk_buff *skb,
+ unsigned int cscov;
+ const char *msg;
+
+- dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &dh);
++ dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
+ if (dh == NULL) {
+ msg = "nf_ct_dccp: short packet ";
+ goto out_invalid;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index fc91ff6..39a6d5d 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1400,8 +1400,6 @@ static int netlink_recvmsg(struct kiocb *kiocb, struct socket *sock,
+ }
+ #endif
+
+- msg->msg_namelen = 0;
+-
+ copied = data_skb->len;
+ if (len < copied) {
+ msg->msg_flags |= MSG_TRUNC;
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index 7a83495..ad1ec1b 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -1184,10 +1184,9 @@ static int nr_recvmsg(struct kiocb *iocb, struct socket *sock,
+ sax->sax25_family = AF_NETROM;
+ skb_copy_from_linear_data_offset(skb, 7, sax->sax25_call.ax25_call,
+ AX25_ADDR_LEN);
++ msg->msg_namelen = sizeof(*sax);
+ }
+
+- msg->msg_namelen = sizeof(*sax);
+-
+ skb_free_datagram(sk, skb);
+
+ release_sock(sk);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 728c080..06707d0 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1423,7 +1423,6 @@ static int packet_recvmsg(struct kiocb *iocb, struct socket *sock,
+ struct sock *sk = sock->sk;
+ struct sk_buff *skb;
+ int copied, err;
+- struct sockaddr_ll *sll;
+
+ err = -EINVAL;
+ if (flags & ~(MSG_PEEK|MSG_DONTWAIT|MSG_TRUNC|MSG_CMSG_COMPAT))
+@@ -1455,22 +1454,10 @@ static int packet_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (skb == NULL)
+ goto out;
+
+- /*
+- * If the address length field is there to be filled in, we fill
+- * it in now.
++ /* You lose any data beyond the buffer you gave. If it worries
++ * a user program they can ask the device for its MTU
++ * anyway.
+ */
+-
+- sll = &PACKET_SKB_CB(skb)->sa.ll;
+- if (sock->type == SOCK_PACKET)
+- msg->msg_namelen = sizeof(struct sockaddr_pkt);
+- else
+- msg->msg_namelen = sll->sll_halen + offsetof(struct sockaddr_ll, sll_addr);
+-
+- /*
+- * You lose any data beyond the buffer you gave. If it worries a
+- * user program they can ask the device for its MTU anyway.
+- */
+-
+ copied = skb->len;
+ if (copied > len) {
+ copied = len;
+@@ -1483,9 +1470,20 @@ static int packet_recvmsg(struct kiocb *iocb, struct socket *sock,
+
+ sock_recv_timestamp(msg, sk, skb);
+
+- if (msg->msg_name)
++ if (msg->msg_name) {
++ /* If the address length field is there to be filled
++ * in, we fill it in now.
++ */
++ if (sock->type == SOCK_PACKET) {
++ msg->msg_namelen = sizeof(struct sockaddr_pkt);
++ } else {
++ struct sockaddr_ll *sll = &PACKET_SKB_CB(skb)->sa.ll;
++ msg->msg_namelen = sll->sll_halen +
++ offsetof(struct sockaddr_ll, sll_addr);
++ }
+ memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa,
+ msg->msg_namelen);
++ }
+
+ if (pkt_sk(sk)->auxdata) {
+ struct tpacket_auxdata aux;
+@@ -1525,12 +1523,12 @@ static int packet_getname_spkt(struct socket *sock, struct sockaddr *uaddr,
+ return -EOPNOTSUPP;
+
+ uaddr->sa_family = AF_PACKET;
++ memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data));
+ dev = dev_get_by_index(sock_net(sk), pkt_sk(sk)->ifindex);
+ if (dev) {
+- strncpy(uaddr->sa_data, dev->name, 14);
++ strlcpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data));
+ dev_put(dev);
+- } else
+- memset(uaddr->sa_data, 0, 14);
++ }
+ *uaddr_len = sizeof(*uaddr);
+
+ return 0;
+diff --git a/net/phonet/datagram.c b/net/phonet/datagram.c
+index ef5c75c..c88da73 100644
+--- a/net/phonet/datagram.c
++++ b/net/phonet/datagram.c
+@@ -122,9 +122,6 @@ static int pn_recvmsg(struct kiocb *iocb, struct sock *sk,
+ if (flags & MSG_OOB)
+ goto out_nofree;
+
+- if (addr_len)
+- *addr_len = sizeof(sa);
+-
+ skb = skb_recv_datagram(sk, flags, noblock, &rval);
+ if (skb == NULL)
+ goto out_nofree;
+@@ -145,8 +142,10 @@ static int pn_recvmsg(struct kiocb *iocb, struct sock *sk,
+
+ rval = (flags & MSG_TRUNC) ? skb->len : copylen;
+
+- if (msg->msg_name != NULL)
+- memcpy(msg->msg_name, &sa, sizeof(struct sockaddr_pn));
++ if (msg->msg_name != NULL) {
++ memcpy(msg->msg_name, &sa, sizeof(sa));
++ *addr_len = sizeof(sa);
++ }
+
+ out:
+ skb_free_datagram(sk, skb);
+diff --git a/net/rds/ib.c b/net/rds/ib.c
+index 536ebe5..5018f3d 100644
+--- a/net/rds/ib.c
++++ b/net/rds/ib.c
+@@ -235,7 +235,8 @@ static int rds_ib_laddr_check(__be32 addr)
+ ret = rdma_bind_addr(cm_id, (struct sockaddr *)&sin);
+ /* due to this, we will claim to support iWARP devices unless we
+ check node_type. */
+- if (ret || cm_id->device->node_type != RDMA_NODE_IB_CA)
++ if (ret || !cm_id->device ||
++ cm_id->device->node_type != RDMA_NODE_IB_CA)
+ ret = -EADDRNOTAVAIL;
+
+ rdsdebug("addr %pI4 ret %d node type %d\n",
+diff --git a/net/rds/iw.c b/net/rds/iw.c
+index db224f7..bff1e4b 100644
+--- a/net/rds/iw.c
++++ b/net/rds/iw.c
+@@ -237,7 +237,8 @@ static int rds_iw_laddr_check(__be32 addr)
+ ret = rdma_bind_addr(cm_id, (struct sockaddr *)&sin);
+ /* due to this, we will claim to support IB devices unless we
+ check node_type. */
+- if (ret || cm_id->device->node_type != RDMA_NODE_RNIC)
++ if (ret || !cm_id->device ||
++ cm_id->device->node_type != RDMA_NODE_RNIC)
+ ret = -EADDRNOTAVAIL;
+
+ rdsdebug("addr %pI4 ret %d node type %d\n",
+diff --git a/net/rds/recv.c b/net/rds/recv.c
+index c45a881c..a11cab9 100644
+--- a/net/rds/recv.c
++++ b/net/rds/recv.c
+@@ -410,8 +410,6 @@ int rds_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
+
+ rdsdebug("size %zu flags 0x%x timeo %ld\n", size, msg_flags, timeo);
+
+- msg->msg_namelen = 0;
+-
+ if (msg_flags & MSG_OOB)
+ goto out;
+
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 2984999..7119ea6 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -1238,7 +1238,6 @@ static int rose_recvmsg(struct kiocb *iocb, struct socket *sock,
+ {
+ struct sock *sk = sock->sk;
+ struct rose_sock *rose = rose_sk(sk);
+- struct sockaddr_rose *srose = (struct sockaddr_rose *)msg->msg_name;
+ size_t copied;
+ unsigned char *asmptr;
+ struct sk_buff *skb;
+@@ -1274,24 +1273,19 @@ static int rose_recvmsg(struct kiocb *iocb, struct socket *sock,
+
+ skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
+
+- if (srose != NULL) {
+- memset(srose, 0, msg->msg_namelen);
++ if (msg->msg_name) {
++ struct sockaddr_rose *srose;
++ struct full_sockaddr_rose *full_srose = msg->msg_name;
++
++ memset(msg->msg_name, 0, sizeof(struct full_sockaddr_rose));
++ srose = msg->msg_name;
+ srose->srose_family = AF_ROSE;
+ srose->srose_addr = rose->dest_addr;
+ srose->srose_call = rose->dest_call;
+ srose->srose_ndigis = rose->dest_ndigis;
+- if (msg->msg_namelen >= sizeof(struct full_sockaddr_rose)) {
+- struct full_sockaddr_rose *full_srose = (struct full_sockaddr_rose *)msg->msg_name;
+- for (n = 0 ; n < rose->dest_ndigis ; n++)
+- full_srose->srose_digis[n] = rose->dest_digis[n];
+- msg->msg_namelen = sizeof(struct full_sockaddr_rose);
+- } else {
+- if (rose->dest_ndigis >= 1) {
+- srose->srose_ndigis = 1;
+- srose->srose_digi = rose->dest_digis[0];
+- }
+- msg->msg_namelen = sizeof(struct sockaddr_rose);
+- }
++ for (n = 0 ; n < rose->dest_ndigis ; n++)
++ full_srose->srose_digis[n] = rose->dest_digis[n];
++ msg->msg_namelen = sizeof(struct full_sockaddr_rose);
+ }
+
+ skb_free_datagram(sk, skb);
+diff --git a/net/rxrpc/ar-recvmsg.c b/net/rxrpc/ar-recvmsg.c
+index a39bf97..d5630d9 100644
+--- a/net/rxrpc/ar-recvmsg.c
++++ b/net/rxrpc/ar-recvmsg.c
+@@ -142,10 +142,13 @@ int rxrpc_recvmsg(struct kiocb *iocb, struct socket *sock,
+
+ /* copy the peer address and timestamp */
+ if (!continue_call) {
+- if (msg->msg_name && msg->msg_namelen > 0)
++ if (msg->msg_name) {
++ size_t len =
++ sizeof(call->conn->trans->peer->srx);
+ memcpy(msg->msg_name,
+- &call->conn->trans->peer->srx,
+- sizeof(call->conn->trans->peer->srx));
++ &call->conn->trans->peer->srx, len);
++ msg->msg_namelen = len;
++ }
+ sock_recv_timestamp(msg, &rx->sk, skb);
+ }
+
+diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
+index ab82f14..b022c59 100644
+--- a/net/sched/sch_atm.c
++++ b/net/sched/sch_atm.c
+@@ -628,6 +628,7 @@ static int atm_tc_dump_class(struct Qdisc *sch, unsigned long cl,
+ struct sockaddr_atmpvc pvc;
+ int state;
+
++ memset(&pvc, 0, sizeof(pvc));
+ pvc.sap_family = AF_ATMPVC;
+ pvc.sap_addr.itf = flow->vcc->dev ? flow->vcc->dev->number : -1;
+ pvc.sap_addr.vpi = flow->vcc->vpi;
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+index 5b132c4..8b6f05d 100644
+--- a/net/sched/sch_cbq.c
++++ b/net/sched/sch_cbq.c
+@@ -1458,6 +1458,7 @@ static __inline__ int cbq_dump_wrr(struct sk_buff *skb, struct cbq_class *cl)
+ unsigned char *b = skb_tail_pointer(skb);
+ struct tc_cbq_wrropt opt;
+
++ memset(&opt, 0, sizeof(opt));
+ opt.flags = 0;
+ opt.allot = cl->allot;
+ opt.priority = cl->priority+1;
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index 2f074d6..9ce5963 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -85,7 +85,7 @@ struct htb_class {
+ unsigned int children;
+ struct htb_class *parent; /* parent class */
+
+- int prio; /* these two are used only by leaves... */
++ u32 prio; /* these two are used only by leaves... */
+ int quantum; /* but stored for parent-to-leaf return */
+
+ union {
+diff --git a/net/sctp/output.c b/net/sctp/output.c
+index d494100..54bc011 100644
+--- a/net/sctp/output.c
++++ b/net/sctp/output.c
+@@ -506,7 +506,8 @@ int sctp_packet_transmit(struct sctp_packet *packet)
+ * by CRC32-C as described in <draft-ietf-tsvwg-sctpcsum-02.txt>.
+ */
+ if (!sctp_checksum_disable &&
+- !(dst->dev->features & (NETIF_F_NO_CSUM | NETIF_F_SCTP_CSUM))) {
++ (!(dst->dev->features & (NETIF_F_NO_CSUM | NETIF_F_SCTP_CSUM)) ||
++ (dst_xfrm(dst) != NULL) || packet->ipfragok)) {
+ __u32 crc32 = sctp_start_cksum((__u8 *)sh, cksum_buf_len);
+
+ /* 3) Put the resultant value into the checksum field in the
+diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c
+index 23e5e97..bc423b4 100644
+--- a/net/sctp/outqueue.c
++++ b/net/sctp/outqueue.c
+@@ -203,6 +203,8 @@ static inline int sctp_cacc_skip(struct sctp_transport *primary,
+ */
+ void sctp_outq_init(struct sctp_association *asoc, struct sctp_outq *q)
+ {
++ memset(q, 0, sizeof(struct sctp_outq));
++
+ q->asoc = asoc;
+ INIT_LIST_HEAD(&q->out_chunk_list);
+ INIT_LIST_HEAD(&q->control_chunk_list);
+@@ -210,13 +212,7 @@ void sctp_outq_init(struct sctp_association *asoc, struct sctp_outq *q)
+ INIT_LIST_HEAD(&q->sacked);
+ INIT_LIST_HEAD(&q->abandoned);
+
+- q->fast_rtx = 0;
+- q->outstanding_bytes = 0;
+ q->empty = 1;
+- q->cork = 0;
+-
+- q->malloced = 0;
+- q->out_qlen = 0;
+ }
+
+ /* Free the outqueue structure and any related pending chunks.
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index feedee7..22d4ed8 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -1356,8 +1356,8 @@ static void sctp_chunk_destroy(struct sctp_chunk *chunk)
+ BUG_ON(!list_empty(&chunk->list));
+ list_del_init(&chunk->transmitted_list);
+
+- /* Free the chunk skb data and the SCTP_chunk stub itself. */
+- dev_kfree_skb(chunk->skb);
++ consume_skb(chunk->skb);
++ consume_skb(chunk->auth_chunk);
+
+ SCTP_DBG_OBJCNT_DEC(chunk);
+ kmem_cache_free(sctp_chunk_cachep, chunk);
+diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
+index ed742bf..9005d83 100644
+--- a/net/sctp/sm_sideeffect.c
++++ b/net/sctp/sm_sideeffect.c
+@@ -1676,6 +1676,11 @@ static int sctp_cmd_interpreter(sctp_event_t event_type,
+ case SCTP_CMD_SEND_NEXT_ASCONF:
+ sctp_cmd_send_asconf(asoc);
+ break;
++
++ case SCTP_CMD_SET_ASOC:
++ asoc = cmd->obj.asoc;
++ break;
++
+ default:
+ printk(KERN_WARNING "Impossible command: %u, %p\n",
+ cmd->verb, cmd->obj.ptr);
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 2f8e1c8..6da0171 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -745,6 +745,13 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(const struct sctp_endpoint *ep,
+ struct sctp_chunk auth;
+ sctp_ierror_t ret;
+
++ /* Make sure that we and the peer are AUTH capable */
++ if (!sctp_auth_enable || !new_asoc->peer.auth_capable) {
++ kfree_skb(chunk->auth_chunk);
++ sctp_association_free(new_asoc);
++ return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
++ }
++
+ /* set-up our fake chunk so that we can process it */
+ auth.skb = chunk->auth_chunk;
+ auth.asoc = chunk->asoc;
+@@ -755,10 +762,6 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(const struct sctp_endpoint *ep,
+ auth.transport = chunk->transport;
+
+ ret = sctp_sf_authenticate(ep, new_asoc, type, &auth);
+-
+- /* We can now safely free the auth_chunk clone */
+- kfree_skb(chunk->auth_chunk);
+-
+ if (ret != SCTP_IERROR_NO_ERROR) {
+ sctp_association_free(new_asoc);
+ return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
+@@ -2045,9 +2048,15 @@ sctp_disposition_t sctp_sf_do_5_2_4_dupcook(const struct sctp_endpoint *ep,
+ }
+
+ /* Delete the tempory new association. */
+- sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));
++ sctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC, SCTP_ASOC(new_asoc));
+ sctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());
+
++ /* Restore association pointer to provide SCTP command interpeter
++ * with a valid context in case it needs to manipulate
++ * the queues */
++ sctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC,
++ SCTP_ASOC((struct sctp_association *)asoc));
++
+ return retval;
+
+ nomem:
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 26ffae2..c26d905 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -67,6 +67,7 @@
+ #include <linux/poll.h>
+ #include <linux/init.h>
+ #include <linux/crypto.h>
++#include <linux/compat.h>
+
+ #include <net/ip.h>
+ #include <net/icmp.h>
+@@ -1284,11 +1285,19 @@ SCTP_STATIC int sctp_setsockopt_connectx(struct sock* sk,
+ /*
+ * New (hopefully final) interface for the API.
+ * We use the sctp_getaddrs_old structure so that use-space library
+- * can avoid any unnecessary allocations. The only defferent part
++ * can avoid any unnecessary allocations. The only different part
+ * is that we store the actual length of the address buffer into the
+- * addrs_num structure member. That way we can re-use the existing
++ * addrs_num structure member. That way we can re-use the existing
+ * code.
+ */
++#ifdef CONFIG_COMPAT
++struct compat_sctp_getaddrs_old {
++ sctp_assoc_t assoc_id;
++ s32 addr_num;
++ compat_uptr_t addrs; /* struct sockaddr * */
++};
++#endif
++
+ SCTP_STATIC int sctp_getsockopt_connectx3(struct sock* sk, int len,
+ char __user *optval,
+ int __user *optlen)
+@@ -1297,16 +1306,30 @@ SCTP_STATIC int sctp_getsockopt_connectx3(struct sock* sk, int len,
+ sctp_assoc_t assoc_id = 0;
+ int err = 0;
+
+- if (len < sizeof(param))
+- return -EINVAL;
++#ifdef CONFIG_COMPAT
++ if (is_compat_task()) {
++ struct compat_sctp_getaddrs_old param32;
+
+- if (copy_from_user(&param, optval, sizeof(param)))
+- return -EFAULT;
++ if (len < sizeof(param32))
++ return -EINVAL;
++ if (copy_from_user(&param32, optval, sizeof(param32)))
++ return -EFAULT;
+
+- err = __sctp_setsockopt_connectx(sk,
+- (struct sockaddr __user *)param.addrs,
+- param.addr_num, &assoc_id);
++ param.assoc_id = param32.assoc_id;
++ param.addr_num = param32.addr_num;
++ param.addrs = compat_ptr(param32.addrs);
++ } else
++#endif
++ {
++ if (len < sizeof(param))
++ return -EINVAL;
++ if (copy_from_user(&param, optval, sizeof(param)))
++ return -EFAULT;
++ }
+
++ err = __sctp_setsockopt_connectx(sk, (struct sockaddr __user *)
++ param.addrs, param.addr_num,
++ &assoc_id);
+ if (err == 0 || err == -EINPROGRESS) {
+ if (copy_to_user(optval, &assoc_id, sizeof(assoc_id)))
+ return -EFAULT;
+@@ -3743,6 +3766,12 @@ SCTP_STATIC void sctp_destroy_sock(struct sock *sk)
+
+ /* Release our hold on the endpoint. */
+ ep = sctp_sk(sk)->ep;
++ /* This could happen during socket init, thus we bail out
++ * early, since the rest of the below is not setup either.
++ */
++ if (ep == NULL)
++ return;
++
+ sctp_endpoint_free(ep);
+ percpu_counter_dec(&sctp_sockets_allocated);
+ local_bh_disable();
+diff --git a/net/socket.c b/net/socket.c
+index bf9fc68..19671d8 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -216,12 +216,13 @@ int move_addr_to_user(struct sockaddr *kaddr, int klen, void __user *uaddr,
+ int err;
+ int len;
+
++ BUG_ON(klen > sizeof(struct sockaddr_storage));
+ err = get_user(len, ulen);
+ if (err)
+ return err;
+ if (len > klen)
+ len = klen;
+- if (len < 0 || len > sizeof(struct sockaddr_storage))
++ if (len < 0)
+ return -EINVAL;
+ if (len) {
+ if (audit_sockaddr(klen, kaddr))
+@@ -1744,8 +1745,10 @@ SYSCALL_DEFINE6(recvfrom, int, fd, void __user *, ubuf, size_t, size,
+ msg.msg_iov = &iov;
+ iov.iov_len = size;
+ iov.iov_base = ubuf;
+- msg.msg_name = (struct sockaddr *)&address;
+- msg.msg_namelen = sizeof(address);
++ /* Save some cycles and don't copy the address if not needed */
++ msg.msg_name = addr ? (struct sockaddr *)&address : NULL;
++ /* We assume all kernel code knows the size of sockaddr_storage */
++ msg.msg_namelen = 0;
+ if (sock->file->f_flags & O_NONBLOCK)
+ flags |= MSG_DONTWAIT;
+ err = sock_recvmsg(sock, &msg, size, flags);
+@@ -1863,6 +1866,20 @@ SYSCALL_DEFINE2(shutdown, int, fd, int, how)
+ #define COMPAT_NAMELEN(msg) COMPAT_MSG(msg, msg_namelen)
+ #define COMPAT_FLAGS(msg) COMPAT_MSG(msg, msg_flags)
+
++static int copy_msghdr_from_user(struct msghdr *kmsg,
++ struct msghdr __user *umsg)
++{
++ if (copy_from_user(kmsg, umsg, sizeof(struct msghdr)))
++ return -EFAULT;
++
++ if (kmsg->msg_namelen < 0)
++ return -EINVAL;
++
++ if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
++ kmsg->msg_namelen = sizeof(struct sockaddr_storage);
++ return 0;
++}
++
+ /*
+ * BSD sendmsg interface
+ */
+@@ -1887,8 +1904,11 @@ SYSCALL_DEFINE3(sendmsg, int, fd, struct msghdr __user *, msg, unsigned, flags)
+ if (get_compat_msghdr(&msg_sys, msg_compat))
+ return -EFAULT;
+ }
+- else if (copy_from_user(&msg_sys, msg, sizeof(struct msghdr)))
+- return -EFAULT;
++ else {
++ err = copy_msghdr_from_user(&msg_sys, msg);
++ if (err)
++ return err;
++ }
+
+ sock = sockfd_lookup_light(fd, &err, &fput_needed);
+ if (!sock)
+@@ -1997,8 +2017,11 @@ SYSCALL_DEFINE3(recvmsg, int, fd, struct msghdr __user *, msg,
+ if (get_compat_msghdr(&msg_sys, msg_compat))
+ return -EFAULT;
+ }
+- else if (copy_from_user(&msg_sys, msg, sizeof(struct msghdr)))
+- return -EFAULT;
++ else {
++ err = copy_msghdr_from_user(&msg_sys, msg);
++ if (err)
++ return err;
++ }
+
+ sock = sockfd_lookup_light(fd, &err, &fput_needed);
+ if (!sock)
+@@ -2017,18 +2040,16 @@ SYSCALL_DEFINE3(recvmsg, int, fd, struct msghdr __user *, msg,
+ goto out_put;
+ }
+
+- /*
+- * Save the user-mode address (verify_iovec will change the
+- * kernel msghdr to use the kernel address space)
++ /* Save the user-mode address (verify_iovec will change the
++ * kernel msghdr to use the kernel address space)
+ */
+-
+ uaddr = (__force void __user *)msg_sys.msg_name;
+ uaddr_len = COMPAT_NAMELEN(msg);
+- if (MSG_CMSG_COMPAT & flags) {
++ if (MSG_CMSG_COMPAT & flags)
+ err = verify_compat_iovec(&msg_sys, iov,
+ (struct sockaddr *)&addr,
+ VERIFY_WRITE);
+- } else
++ else
+ err = verify_iovec(&msg_sys, iov,
+ (struct sockaddr *)&addr,
+ VERIFY_WRITE);
+@@ -2039,6 +2060,9 @@ SYSCALL_DEFINE3(recvmsg, int, fd, struct msghdr __user *, msg,
+ cmsg_ptr = (unsigned long)msg_sys.msg_control;
+ msg_sys.msg_flags = flags & (MSG_CMSG_CLOEXEC|MSG_CMSG_COMPAT);
+
++ /* We assume all kernel code knows the size of sockaddr_storage */
++ msg_sys.msg_namelen = 0;
++
+ if (sock->file->f_flags & O_NONBLOCK)
+ flags |= MSG_DONTWAIT;
+ err = sock_recvmsg(sock, &msg_sys, total_len, flags);
+diff --git a/net/tipc/eth_media.c b/net/tipc/eth_media.c
+index 524ba56..22453a8 100644
+--- a/net/tipc/eth_media.c
++++ b/net/tipc/eth_media.c
+@@ -56,6 +56,7 @@ struct eth_bearer {
+ struct tipc_bearer *bearer;
+ struct net_device *dev;
+ struct packet_type tipc_packet_type;
++ struct work_struct setup;
+ };
+
+ static struct eth_bearer eth_bearers[MAX_ETH_BEARERS];
+@@ -122,6 +123,17 @@ static int recv_msg(struct sk_buff *buf, struct net_device *dev,
+ }
+
+ /**
++ * setup_bearer - setup association between Ethernet bearer and interface
++ */
++static void setup_bearer(struct work_struct *work)
++{
++ struct eth_bearer *eb_ptr =
++ container_of(work, struct eth_bearer, setup);
++
++ dev_add_pack(&eb_ptr->tipc_packet_type);
++}
++
++/**
+ * enable_bearer - attach TIPC bearer to an Ethernet interface
+ */
+
+@@ -157,7 +169,8 @@ static int enable_bearer(struct tipc_bearer *tb_ptr)
+ eb_ptr->tipc_packet_type.af_packet_priv = eb_ptr;
+ INIT_LIST_HEAD(&(eb_ptr->tipc_packet_type.list));
+ dev_hold(dev);
+- dev_add_pack(&eb_ptr->tipc_packet_type);
++ INIT_WORK(&eb_ptr->setup, setup_bearer);
++ schedule_work(&eb_ptr->setup);
+ }
+
+ /* Associate TIPC bearer with Ethernet bearer */
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index eccb86b..124f1a2 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -917,9 +917,6 @@ static int recv_msg(struct kiocb *iocb, struct socket *sock,
+ goto exit;
+ }
+
+- /* will be updated in set_orig_addr() if needed */
+- m->msg_namelen = 0;
+-
+ restart:
+
+ /* Look for a message in receive queue; wait if necessary */
+@@ -1053,9 +1050,6 @@ static int recv_stream(struct kiocb *iocb, struct socket *sock,
+ goto exit;
+ }
+
+- /* will be updated in set_orig_addr() if needed */
+- m->msg_namelen = 0;
+-
+ restart:
+
+ /* Look for a message in receive queue; wait if necessary */
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index d146b76..79c1dce 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -674,7 +674,9 @@ static int unix_autobind(struct socket *sock)
+ int err;
+ unsigned int retries = 0;
+
+- mutex_lock(&u->readlock);
++ err = mutex_lock_interruptible(&u->readlock);
++ if (err)
++ return err;
+
+ err = 0;
+ if (u->addr)
+@@ -806,7 +808,9 @@ static int unix_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ goto out;
+ addr_len = err;
+
+- mutex_lock(&u->readlock);
++ err = mutex_lock_interruptible(&u->readlock);
++ if (err)
++ goto out;
+
+ err = -EINVAL;
+ if (u->addr)
+@@ -1682,7 +1686,6 @@ static void unix_copy_addr(struct msghdr *msg, struct sock *sk)
+ {
+ struct unix_sock *u = unix_sk(sk);
+
+- msg->msg_namelen = 0;
+ if (u->addr) {
+ msg->msg_namelen = u->addr->len;
+ memcpy(msg->msg_name, u->addr->name, u->addr->len);
+@@ -1705,8 +1708,6 @@ static int unix_dgram_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (flags&MSG_OOB)
+ goto out;
+
+- msg->msg_namelen = 0;
+-
+ mutex_lock(&u->readlock);
+
+ skb = skb_recv_datagram(sk, flags, noblock, &err);
+@@ -1832,8 +1833,6 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock,
+ target = sock_rcvlowat(sk, flags&MSG_WAITALL, size);
+ timeo = sock_rcvtimeo(sk, flags&MSG_DONTWAIT);
+
+- msg->msg_namelen = 0;
+-
+ /* Lock the socket to prevent queue disordering
+ * while sleeps in memcpy_tomsg
+ */
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index 2e9e300..40c447f 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -1294,10 +1294,9 @@ static int x25_recvmsg(struct kiocb *iocb, struct socket *sock,
+ if (sx25) {
+ sx25->sx25_family = AF_X25;
+ sx25->sx25_addr = x25->dest_addr;
++ msg->msg_namelen = sizeof(*sx25);
+ }
+
+- msg->msg_namelen = sizeof(struct sockaddr_x25);
+-
+ lock_sock(sk);
+ x25_check_rbuf(sk);
+ release_sock(sk);
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index ff17820..dee7177 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -1074,6 +1074,10 @@ static int security_context_to_sid_core(const char *scontext, u32 scontext_len,
+ struct context context;
+ int rc = 0;
+
++ /* An empty security context is never valid. */
++ if (!scontext_len)
++ return -EINVAL;
++
+ if (!ss_initialized) {
+ int i;
+
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 82b6fdc..3b9443b 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1846,6 +1846,9 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
+ int r;
+ struct kvm_vcpu *vcpu, *v;
+
++ if (id >= KVM_MAX_VCPUS)
++ return -EINVAL;
++
+ vcpu = kvm_arch_vcpu_create(kvm, id);
+ if (IS_ERR(vcpu))
+ return PTR_ERR(vcpu);
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.63.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/all/stable/2.6.32.63.patch Tue Nov 25 16:37:48 2014 (r22085)
@@ -0,0 +1,536 @@
+diff --git a/Makefile b/Makefile
+index 76c3b6c..0e35b32 100644
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index 7ffab7cb..3a0fae6 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -517,7 +517,9 @@ struct ethtool_ops {
+ #define ETHTOOL_GMSGLVL 0x00000007 /* Get driver message level */
+ #define ETHTOOL_SMSGLVL 0x00000008 /* Set driver msg level. */
+ #define ETHTOOL_NWAY_RST 0x00000009 /* Restart autonegotiation. */
+-#define ETHTOOL_GLINK 0x0000000a /* Get link status (ethtool_value) */
++/* Get link status for host, i.e. whether the interface *and* the
++ * physical port (if there is one) are up (ethtool_value). */
++#define ETHTOOL_GLINK 0x0000000a
+ #define ETHTOOL_GEEPROM 0x0000000b /* Get EEPROM data */
+ #define ETHTOOL_SEEPROM 0x0000000c /* Set EEPROM data. */
+ #define ETHTOOL_GCOALESCE 0x0000000e /* Get coalesce config */
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 267e484..b6998ef 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -680,6 +680,22 @@ static enum audit_state audit_filter_task(struct task_struct *tsk, char **key)
+ return AUDIT_BUILD_CONTEXT;
+ }
+
++static int audit_in_mask(const struct audit_krule *rule, unsigned long val)
++{
++ int word, bit;
++
++ if (val > 0xffffffff)
++ return false;
++
++ word = AUDIT_WORD(val);
++ if (word >= AUDIT_BITMASK_SIZE)
++ return false;
++
++ bit = AUDIT_BIT(val);
++
++ return rule->mask[word] & bit;
++}
++
+ /* At syscall entry and exit time, this filter is called if the
+ * audit_state is not low enough that auditing cannot take place, but is
+ * also not high enough that we already know we have to write an audit
+@@ -697,11 +713,8 @@ static enum audit_state audit_filter_syscall(struct task_struct *tsk,
+
+ rcu_read_lock();
+ if (!list_empty(list)) {
+- int word = AUDIT_WORD(ctx->major);
+- int bit = AUDIT_BIT(ctx->major);
+-
+ list_for_each_entry_rcu(e, list, list) {
+- if ((e->rule.mask[word] & bit) == bit &&
++ if (audit_in_mask(&e->rule, ctx->major) &&
+ audit_filter_rules(tsk, &e->rule, ctx, NULL,
+ &state)) {
+ rcu_read_unlock();
+@@ -730,8 +743,6 @@ void audit_filter_inodes(struct task_struct *tsk, struct audit_context *ctx)
+
+ rcu_read_lock();
+ for (i = 0; i < ctx->name_count; i++) {
+- int word = AUDIT_WORD(ctx->major);
+- int bit = AUDIT_BIT(ctx->major);
+ struct audit_names *n = &ctx->names[i];
+ int h = audit_hash_ino((u32)n->ino);
+ struct list_head *list = &audit_inode_hash[h];
+@@ -740,7 +751,7 @@ void audit_filter_inodes(struct task_struct *tsk, struct audit_context *ctx)
+ continue;
+
+ list_for_each_entry_rcu(e, list, list) {
+- if ((e->rule.mask[word] & bit) == bit &&
++ if (audit_in_mask(&e->rule, ctx->major) &&
+ audit_filter_rules(tsk, &e->rule, ctx, n, &state)) {
+ rcu_read_unlock();
+ ctx->current_state = state;
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 9c5ffe1..55dd3d2 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -536,6 +536,55 @@ void exit_pi_state_list(struct task_struct *curr)
+ spin_unlock_irq(&curr->pi_lock);
+ }
+
++/*
++ * We need to check the following states:
++ *
++ * Waiter | pi_state | pi->owner | uTID | uODIED | ?
++ *
++ * [1] NULL | --- | --- | 0 | 0/1 | Valid
++ * [2] NULL | --- | --- | >0 | 0/1 | Valid
++ *
++ * [3] Found | NULL | -- | Any | 0/1 | Invalid
++ *
++ * [4] Found | Found | NULL | 0 | 1 | Valid
++ * [5] Found | Found | NULL | >0 | 1 | Invalid
++ *
++ * [6] Found | Found | task | 0 | 1 | Valid
++ *
++ * [7] Found | Found | NULL | Any | 0 | Invalid
++ *
++ * [8] Found | Found | task | ==taskTID | 0/1 | Valid
++ * [9] Found | Found | task | 0 | 0 | Invalid
++ * [10] Found | Found | task | !=taskTID | 0/1 | Invalid
++ *
++ * [1] Indicates that the kernel can acquire the futex atomically. We
++ * came came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit.
++ *
++ * [2] Valid, if TID does not belong to a kernel thread. If no matching
++ * thread is found then it indicates that the owner TID has died.
++ *
++ * [3] Invalid. The waiter is queued on a non PI futex
++ *
++ * [4] Valid state after exit_robust_list(), which sets the user space
++ * value to FUTEX_WAITERS | FUTEX_OWNER_DIED.
++ *
++ * [5] The user space value got manipulated between exit_robust_list()
++ * and exit_pi_state_list()
++ *
++ * [6] Valid state after exit_pi_state_list() which sets the new owner in
++ * the pi_state but cannot access the user space value.
++ *
++ * [7] pi_state->owner can only be NULL when the OWNER_DIED bit is set.
++ *
++ * [8] Owner and user space value match
++ *
++ * [9] There is no transient state which sets the user space TID to 0
++ * except exit_robust_list(), but this is indicated by the
++ * FUTEX_OWNER_DIED bit. See [4]
++ *
++ * [10] There is no transient state which leaves owner and user space
++ * TID out of sync.
++ */
+ static int
+ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb,
+ union futex_key *key, struct futex_pi_state **ps)
+@@ -551,12 +600,13 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb,
+ plist_for_each_entry_safe(this, next, head, list) {
+ if (match_futex(&this->key, key)) {
+ /*
+- * Another waiter already exists - bump up
+- * the refcount and return its pi_state:
++ * Sanity check the waiter before increasing
++ * the refcount and attaching to it.
+ */
+ pi_state = this->pi_state;
+ /*
+- * Userspace might have messed up non PI and PI futexes
++ * Userspace might have messed up non-PI and
++ * PI futexes [3]
+ */
+ if (unlikely(!pi_state))
+ return -EINVAL;
+@@ -564,34 +614,70 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb,
+ WARN_ON(!atomic_read(&pi_state->refcount));
+
+ /*
+- * When pi_state->owner is NULL then the owner died
+- * and another waiter is on the fly. pi_state->owner
+- * is fixed up by the task which acquires
+- * pi_state->rt_mutex.
+- *
+- * We do not check for pid == 0 which can happen when
+- * the owner died and robust_list_exit() cleared the
+- * TID.
++ * Handle the owner died case:
+ */
+- if (pid && pi_state->owner) {
++ if (uval & FUTEX_OWNER_DIED) {
+ /*
+- * Bail out if user space manipulated the
+- * futex value.
++ * exit_pi_state_list sets owner to NULL and
++ * wakes the topmost waiter. The task which
++ * acquires the pi_state->rt_mutex will fixup
++ * owner.
+ */
+- if (pid != task_pid_vnr(pi_state->owner))
++ if (!pi_state->owner) {
++ /*
++ * No pi state owner, but the user
++ * space TID is not 0. Inconsistent
++ * state. [5]
++ */
++ if (pid)
++ return -EINVAL;
++ /*
++ * Take a ref on the state and
++ * return. [4]
++ */
++ goto out_state;
++ }
++
++ /*
++ * If TID is 0, then either the dying owner
++ * has not yet executed exit_pi_state_list()
++ * or some waiter acquired the rtmutex in the
++ * pi state, but did not yet fixup the TID in
++ * user space.
++ *
++ * Take a ref on the state and return. [6]
++ */
++ if (!pid)
++ goto out_state;
++ } else {
++ /*
++ * If the owner died bit is not set,
++ * then the pi_state must have an
++ * owner. [7]
++ */
++ if (!pi_state->owner)
+ return -EINVAL;
+ }
+
++ /*
++ * Bail out if user space manipulated the
++ * futex value. If pi state exists then the
++ * owner TID must be the same as the user
++ * space TID. [9/10]
++ */
++ if (pid != task_pid_vnr(pi_state->owner))
++ return -EINVAL;
++
++ out_state:
+ atomic_inc(&pi_state->refcount);
+ *ps = pi_state;
+-
+ return 0;
+ }
+ }
+
+ /*
+ * We are the first waiter - try to look up the real owner and attach
+- * the new pi_state to it, but bail out when TID = 0
++ * the new pi_state to it, but bail out when TID = 0 [1]
+ */
+ if (!pid)
+ return -ESRCH;
+@@ -599,6 +685,11 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb,
+ if (!p)
+ return -ESRCH;
+
++ if (!p->mm) {
++ put_task_struct(p);
++ return -EPERM;
++ }
++
+ /*
+ * We need to look at the task state flags to figure out,
+ * whether the task is exiting. To protect against the do_exit
+@@ -619,6 +710,9 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb,
+ return ret;
+ }
+
++ /*
++ * No existing pi state. First waiter. [2]
++ */
+ pi_state = alloc_pi_state();
+
+ /*
+@@ -692,10 +786,18 @@ retry:
+ return -EDEADLK;
+
+ /*
+- * Surprise - we got the lock. Just return to userspace:
++ * Surprise - we got the lock, but we do not trust user space at all.
+ */
+- if (unlikely(!curval))
+- return 1;
++ if (unlikely(!curval)) {
++ /*
++ * We verify whether there is kernel state for this
++ * futex. If not, we can safely assume, that the 0 ->
++ * TID transition is correct. If state exists, we do
++ * not bother to fixup the user space state as it was
++ * corrupted already.
++ */
++ return futex_top_waiter(hb, key) ? -EINVAL : 1;
++ }
+
+ uval = curval;
+
+@@ -803,6 +905,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this)
+ struct task_struct *new_owner;
+ struct futex_pi_state *pi_state = this->pi_state;
+ u32 curval, newval;
++ int ret = 0;
+
+ if (!pi_state)
+ return -EINVAL;
+@@ -827,25 +930,21 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this)
+ new_owner = this->task;
+
+ /*
+- * We pass it to the next owner. (The WAITERS bit is always
+- * kept enabled while there is PI state around. We must also
+- * preserve the owner died bit.)
++ * We pass it to the next owner. The WAITERS bit is always
++ * kept enabled while there is PI state around. We cleanup the
++ * owner died bit, because we are the owner.
+ */
+- if (!(uval & FUTEX_OWNER_DIED)) {
+- int ret = 0;
++ newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
+
+- newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
+-
+- curval = cmpxchg_futex_value_locked(uaddr, uval, newval);
++ curval = cmpxchg_futex_value_locked(uaddr, uval, newval);
+
+- if (curval == -EFAULT)
+- ret = -EFAULT;
+- else if (curval != uval)
+- ret = -EINVAL;
+- if (ret) {
+- spin_unlock(&pi_state->pi_mutex.wait_lock);
+- return ret;
+- }
++ if (curval == -EFAULT)
++ ret = -EFAULT;
++ else if (curval != uval)
++ ret = -EINVAL;
++ if (ret) {
++ spin_unlock(&pi_state->pi_mutex.wait_lock);
++ return ret;
+ }
+
+ spin_lock_irq(&pi_state->owner->pi_lock);
+@@ -1122,8 +1221,8 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ * hb1 and hb2 must be held by the caller.
+ *
+ * Returns:
+- * 0 - failed to acquire the lock atomicly
+- * 1 - acquired the lock
++ * 0 - failed to acquire the lock atomically;
++ * >0 - acquired the lock, return value is vpid of the top_waiter
+ * <0 - error
+ */
+ static int futex_proxy_trylock_atomic(u32 __user *pifutex,
+@@ -1134,7 +1233,7 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex,
+ {
+ struct futex_q *top_waiter = NULL;
+ u32 curval;
+- int ret;
++ int ret, vpid;
+
+ if (get_futex_value_locked(&curval, pifutex))
+ return -EFAULT;
+@@ -1162,11 +1261,13 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex,
+ * the contended case or if set_waiters is 1. The pi_state is returned
+ * in ps in contended cases.
+ */
++ vpid = task_pid_vnr(top_waiter->task);
+ ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
+ set_waiters);
+- if (ret == 1)
++ if (ret == 1) {
+ requeue_pi_wake_futex(top_waiter, key2, hb2);
+-
++ return vpid;
++ }
+ return ret;
+ }
+
+@@ -1196,10 +1297,16 @@ static int futex_requeue(u32 __user *uaddr1, int fshared, u32 __user *uaddr2,
+ struct futex_hash_bucket *hb1, *hb2;
+ struct plist_head *head1;
+ struct futex_q *this, *next;
+- u32 curval2;
+
+ if (requeue_pi) {
+ /*
++ * Requeue PI only works on two distinct uaddrs. This
++ * check is only valid for private futexes. See below.
++ */
++ if (uaddr1 == uaddr2)
++ return -EINVAL;
++
++ /*
+ * requeue_pi requires a pi_state, try to allocate it now
+ * without any locks in case it fails.
+ */
+@@ -1237,6 +1344,15 @@ retry:
+ if (unlikely(ret != 0))
+ goto out_put_key1;
+
++ /*
++ * The check above which compares uaddrs is not sufficient for
++ * shared futexes. We need to compare the keys:
++ */
++ if (requeue_pi && match_futex(&key1, &key2)) {
++ ret = -EINVAL;
++ goto out_put_keys;
++ }
++
+ hb1 = hash_futex(&key1);
+ hb2 = hash_futex(&key2);
+
+@@ -1282,16 +1398,25 @@ retry_private:
+ * At this point the top_waiter has either taken uaddr2 or is
+ * waiting on it. If the former, then the pi_state will not
+ * exist yet, look it up one more time to ensure we have a
+- * reference to it.
++ * reference to it. If the lock was taken, ret contains the
++ * vpid of the top waiter task.
+ */
+- if (ret == 1) {
++ if (ret > 0) {
+ WARN_ON(pi_state);
+ drop_count++;
+ task_count++;
+- ret = get_futex_value_locked(&curval2, uaddr2);
+- if (!ret)
+- ret = lookup_pi_state(curval2, hb2, &key2,
+- &pi_state);
++ /*
++ * If we acquired the lock, then the user
++ * space value of uaddr2 should be vpid. It
++ * cannot be changed by the top waiter as it
++ * is blocked on hb2 lock if it tries to do
++ * so. If something fiddled with it behind our
++ * back the pi state lookup might unearth
++ * it. So we rather use the known value than
++ * rereading and handing potential crap to
++ * lookup_pi_state.
++ */
++ ret = lookup_pi_state(ret, hb2, &key2, &pi_state);
+ }
+
+ switch (ret) {
+@@ -2083,9 +2208,10 @@ retry:
+ /*
+ * To avoid races, try to do the TID -> 0 atomic transition
+ * again. If it succeeds then we can return without waking
+- * anyone else up:
++ * anyone else up. We only try this if neither the waiters nor
++ * the owner died bit are set.
+ */
+- if (!(uval & FUTEX_OWNER_DIED))
++ if (!(uval & ~FUTEX_TID_MASK))
+ uval = cmpxchg_futex_value_locked(uaddr, task_pid_vnr(current), 0);
+
+
+@@ -2120,11 +2246,9 @@ retry:
+ /*
+ * No waiters - kernel unlocks the futex:
+ */
+- if (!(uval & FUTEX_OWNER_DIED)) {
+- ret = unlock_futex_pi(uaddr, uval);
+- if (ret == -EFAULT)
+- goto pi_faulted;
+- }
++ ret = unlock_futex_pi(uaddr, uval);
++ if (ret == -EFAULT)
++ goto pi_faulted;
+
+ out_unlock:
+ spin_unlock(&hb->lock);
+@@ -2285,6 +2409,15 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, int fshared,
+ if (ret)
+ goto out_key2;
+
++ /*
++ * The check above which compares uaddrs is not sufficient for
++ * shared futexes. We need to compare the keys:
++ */
++ if (match_futex(&q.key, &key2)) {
++ ret = -EINVAL;
++ goto out_put_keys;
++ }
++
+ /* Queue the futex_q, drop the hb lock, wait for wakeup. */
+ futex_wait_queue_me(hb, &q, to);
+
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index abbe8fa..f9e7179 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -365,6 +365,20 @@ static int ethtool_nway_reset(struct net_device *dev)
+ return dev->ethtool_ops->nway_reset(dev);
+ }
+
++static int ethtool_get_link(struct net_device *dev, char __user *useraddr)
++{
++ struct ethtool_value edata = { .cmd = ETHTOOL_GLINK };
++
++ if (!dev->ethtool_ops->get_link)
++ return -EOPNOTSUPP;
++
++ edata.data = netif_running(dev) && dev->ethtool_ops->get_link(dev);
++
++ if (copy_to_user(useraddr, &edata, sizeof(edata)))
++ return -EFAULT;
++ return 0;
++}
++
+ static int ethtool_get_eeprom(struct net_device *dev, void __user *useraddr)
+ {
+ struct ethtool_eeprom eeprom;
+@@ -1016,8 +1030,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ rc = ethtool_nway_reset(dev);
+ break;
+ case ETHTOOL_GLINK:
+- rc = ethtool_get_value(dev, useraddr, ethcmd,
+- dev->ethtool_ops->get_link);
++ rc = ethtool_get_link(dev, useraddr);
+ break;
+ case ETHTOOL_GEEPROM:
+ rc = ethtool_get_eeprom(dev, useraddr);
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index e2eaf29..e6bf72c 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -121,7 +121,8 @@ static struct ctl_table netns_core_table[] = {
+ .mode = 0644,
+ .extra1 = &zero,
+ .extra2 = &ushort_max,
+- .proc_handler = proc_dointvec_minmax
++ .proc_handler = proc_dointvec_minmax,
++ .strategy = &sysctl_intvec
+ },
+ { .ctl_name = 0 }
+ };
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 910fa54..d957371 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -241,7 +241,8 @@ static struct ctl_table ipv4_table[] = {
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &tcp_syn_retries_min,
+- .extra2 = &tcp_syn_retries_max
++ .extra2 = &tcp_syn_retries_max,
++ .strategy = &sysctl_intvec
+ },
+ {
+ .ctl_name = NET_IPV4_NONLOCAL_BIND,
Modified: dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9 Tue Nov 25 13:48:40 2014 (r22084)
+++ dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9 Tue Nov 25 16:37:48 2014 (r22085)
@@ -1,4 +1,89 @@
+# Drop patches included in 2.6.32.61..2.6.32.64
+- bugfix/x86/x86-Don-t-use-the-EFI-reboot-method-by-default.patch
+- bugfix/x86/msr-add-capabilities-check.patch
+- bugfix/x86/KVM-x86-relax-MSR_KVM_SYSTEM_TIME-alignment-check.patch
+- bugfix/x86/KVM-x86-fix-for-buffer-overflow-in-handling-of-MSR_K.patch
+- bugfix/x86/KVM-x86-invalid-opcode-oops-on-SET_SREGS-with-OSXSAV.patch
+- bugfix/ia64/revert-pcdp-use-early_ioremap-early_iounmap-to-acces.patch
+- bugfix/all/kernel-signal.c-use-__ARCH_HAS_SA_RESTORER-instead-o.patch
+- bugfix/all/signal-Define-__ARCH_HAS_SA_RESTORER-so-we-know-whet.patch
+- bugfix/all/signal-stop-infoleak-via-tkill-and-tgkill-signals.patch
+- bugfix/all/signal-always-clear-sa_restorer-on-execve.patch
+- bugfix/all/exec-use-ELOOP-for-max-recursion-depth.patch
+- bugfix/all/exec-do-not-leave-bprm-interp-on-stack.patch
+- bugfix/all/USB-cdc-wdm-fix-buffer-overflow.patch
+- bugfix/all/USB-io_ti-Fix-Null-dereference-in-chase-port.patch
+- bugfix/all/fs-compat_ioctl.c-VIDEO_SET_SPU_PALETTE-missing-erro.patch
+- bugfix/all/net-fix-info-leak-in-compat-dev_ifconf.patch
+- bugfix/all/ext4-Fix-max-file-size-and-logical-block-counting-of-extent-format-file.patch
+- bugfix/all/ext4-avoid-hang-when-mounting-non-journal-filesystem.patch
+- bugfix/all/ext4-make-orphan-functions-be-no-op-in-no-journal-mo.patch
+- bugfix/all/fat-Fix-stat-f_namelen.patch
+- bugfix/all/isofs-avoid-info-leak-on-export.patch
+- debian/nls-Avoid-ABI-change-for-CVE-2013-1773-fix.patch
+- bugfix/all/NLS-improve-UTF8-UTF16-string-conversion-routine.patch
+- bugfix/all/ext4-AIO-vs-fallocate-stale-data-exposure.patch
+- bugfix/all/udf-avoid-info-leak-on-export.patch
+- bugfix/all/usermodehelper-____call_usermodehelper-doesnt-need-do_exit.patch
+- bugfix/all/usermodehelper-implement-UMH_KILLABLE.patch
+- bugfix/all/usermodehelper-introduce-umh_complete.patch
+- bugfix/all/fix-ptrace-when-task-is-in-task_is_stopped-state.patch
+- bugfix/all/ptrace-ensure-arch_ptrace-ptrace_request-can-never-race-with-SIGKILL.patch
+- bugfix/all/ptrace-introduce-signal_wake_up_state-and-ptrace_signal_wake_up.patch
+- bugfix/all/ptrace-ptrace_resume-shouldnt-wake-up-TASK_TRACED-thread.patch
+- bugfix/all/mm-fix-vma_resv_map-NULL-pointer.patch
+- bugfix/all/hugetlb-fix-resv_map-leak-in-error-path.patch
+- bugfix/all/inet-add-RCU-protection-to-inet-opt.patch
+- bugfix/all/ipv6-discard-overlapping-fragment.patch
+- bugfix/all/CVE-2013-4470.patch
+- bugfix/all/CVE-2013-4387.patch
+- bugfix/all/ipv6-ipv6_sk_dst_check_must-not-assume-ipv6-dst.patch
+# TODO: the following patch had to be disabled to be able to unapply
+# ipv6-make-fragment-identifications-less-predictable.patch but the
+# patch is not part of the new upstream releases. Investigate whether
+# it must be updated (under a new name) and reapplied.
+- bugfix/all/ipv6-fix-NULL-dereference-in-udp6_ufo_fragment.patch
+- bugfix/all/ipv6-make-fragment-identifications-less-predictable.patch
+- bugfix/all/kmod-make-__request_module-killable.patch
+- bugfix/all/kmod-introduce-call_modprobe-helper.patch
+- bugfix/all/wake_up_process-should-be-never-used-to-wakeup-a-TASK_STOPPED-TRACED-task.patch
+- bugfix/all/revert-time-avoid-making-adjustments-if-we-haven-t.patch
+- bugfix/all/tmpfs-fix-use-after-free-of-mempolicy-object.patch
+- bugfix/all/ax25-fix-info-leak-via-msg_name-in-ax25_recvmsg.patch
+- bugfix/all/atm-fix-info-leak-in-getsockopt-SO_ATMPVC.patch
+- bugfix/all/atm-fix-info-leak-via-getsockname.patch
+- bugfix/all/atm-update-msg_namelen-in-vcc_recvmsg.patch
+- bugfix/all/Bluetooth-fix-possible-info-leak-in-bt_sock_recvmsg.patch
+- bugfix/all/Bluetooth-L2CAP-Fix-info-leak-via-getsockname.patch
+- bugfix/all/Bluetooth-RFCOMM-Fix-missing-msg_namelen-update-in-r.patch
+- bugfix/all/Bluetooth-RFCOMM-Fix-info-leak-via-getsockname.patch
+- bugfix/all/Bluetooth-HCI-Fix-info-leak-in-getsockopt-HCI_FILTER.patch
+- bugfix/all/Bluetooth-Fix-incorrect-strncpy-in-hidp_setup_hid.patch
+- bugfix/all/dcbnl-fix-various-netlink-info-leaks.patch
+- bugfix/all/net-fix-divide-by-zero-in-tcp-algorithm-illinois.patch
+- bugfix/all/irda-Fix-missing-msg_namelen-update-in-irda_recvmsg_.patch
+- bugfix/all/iucv-Fix-missing-msg_namelen-update-in-iucv_sock_rec.patch
+- bugfix/all/llc-Fix-missing-msg_namelen-update-in-llc_ui_recvmsg.patch
+- bugfix/all/llc-fix-info-leak-via-getsockname.patch
+- bugfix/all/ipvs-fix-info-leak-in-getsockopt-IP_VS_SO_GET_TIMEOU.patch
+- bugfix/all/rds-set-correct-msg_namelen.patch
+- bugfix/all/rose-fix-info-leak-via-msg_name-in-rose_recvmsg.patch
+- bugfix/all/kernel-panic-when-mount-NFSv4.patch
+- bugfix/all/tipc-fix-info-leaks-via-msg_name-in-recv_msg-recv_st.patch
+- bugfix/all/xfrm_user-return-error-pointer-instead-of-NULL-2.patch
+- bugfix/all/xfrm_user-return-error-pointer-instead-of-NULL.patch
+- bugfix/all/xfrm_user-fix-info-leak-in-copy_to_user_tmpl.patch
+- bugfix/all/xfrm_user-fix-info-leak-in-copy_to_user_policy.patch
+- bugfix/all/xfrm_user-fix-info-leak-in-copy_to_user_state.patch
+- bugfix/all/keys-fix-race-with-concurrent-install_user_keyrings.patch
+- bugfix/all/KVM-Fix-bounds-checking-in-ioapic-indirect-register-.patch
+# Add upstream patches
++ bugfix/all/stable/2.6.32.61.patch
++ bugfix/all/stable/2.6.32.62.patch
++ bugfix/all/stable/2.6.32.63.patch
++ bugfix/all/stable/2.6.32.64.patch
+# Reinstate the no ABI change patch
++ debian/nls-Avoid-ABI-change-for-CVE-2013-1773-fix.patch
+ bugfix/all/CVE-2014-4653.patch
+ bugfix/all/CVE-2014-4654+4655.patch
+ bugfix/all/CVE-2014-4943.patch
-+ bugfix/all/stable/2.6.32.64.patch