[kernel] r16253 - in dists/sid/linux-2.6/debian: . patches/bugfix/all patches/debian patches/series
Ben Hutchings
benh at alioth.debian.org
Thu Sep 9 03:19:21 UTC 2010
Author: benh
Date: Thu Sep 9 03:19:15 2010
New Revision: 16253
Log:
net/{tcp,udp,llc,sctp,tipc,x25}: Add limit for socket backlog (Closes: #576838)
Added:
dists/sid/linux-2.6/debian/patches/bugfix/all/llc-use-limited-socket-backlog.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/net-add-limit-for-socket-backlog.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/sctp-use-limited-socket-backlog.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/tcp-use-limited-socket-backlog.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/tipc-use-limited-socket-backlog.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/udp-use-limited-socket-backlog.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/x25-use-limited-socket-backlog.patch
dists/sid/linux-2.6/debian/patches/debian/net-Avoid-ABI-change-from-limit-for-socket-backlog.patch
Modified:
dists/sid/linux-2.6/debian/changelog
dists/sid/linux-2.6/debian/patches/series/22
Modified: dists/sid/linux-2.6/debian/changelog
==============================================================================
--- dists/sid/linux-2.6/debian/changelog Thu Sep 9 02:34:10 2010 (r16252)
+++ dists/sid/linux-2.6/debian/changelog Thu Sep 9 03:19:15 2010 (r16253)
@@ -42,6 +42,8 @@
2.6.32.16; reverted due to a regression which was addressed in 2.6.32.19)
* sched, cputime: Introduce thread_group_times() (from 2.6.32.19; reverted
due to the potential ABI change which we now carefully avoid)
+ * net/{tcp,udp,llc,sctp,tipc,x25}: Add limit for socket backlog
+ (Closes: #576838)
[ Bastian Blank ]
* Use Breaks instead of Conflicts.
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/llc-use-limited-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/llc-use-limited-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,36 @@
+From 5048af9dfdac6e8503002e7b4f363f17bab7835c Mon Sep 17 00:00:00 2001
+From: Zhu Yi <yi.zhu at intel.com>
+Date: Thu, 4 Mar 2010 18:01:43 +0000
+Subject: [PATCH 4/8] llc: use limited socket backlog
+
+[ Upstream commit 79545b681961d7001c1f4c3eb9ffb87bed4485db ]
+
+Make llc adapt to the limited socket backlog change.
+
+Cc: Arnaldo Carvalho de Melo <acme at ghostprotocols.net>
+Signed-off-by: Zhu Yi <yi.zhu at intel.com>
+Acked-by: Eric Dumazet <eric.dumazet at gmail.com>
+Acked-by: Arnaldo Carvalho de Melo <acme at redhat.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+---
+ net/llc/llc_conn.c | 3 ++-
+ 1 files changed, 2 insertions(+), 1 deletions(-)
+
+diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
+index c6bab39..8f97546 100644
+--- a/net/llc/llc_conn.c
++++ b/net/llc/llc_conn.c
+@@ -756,7 +756,8 @@ void llc_conn_handler(struct llc_sap *sap, struct sk_buff *skb)
+ else {
+ dprintk("%s: adding to backlog...\n", __func__);
+ llc_set_backlog_type(skb, LLC_PACKET);
+- sk_add_backlog(sk, skb);
++ if (sk_add_backlog_limited(sk, skb))
++ goto drop_unlock;
+ }
+ out:
+ bh_unlock_sock(sk);
+--
+1.7.1
+
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/net-add-limit-for-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/net-add-limit-for-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,139 @@
+From d17139a8b5feba09a45e3b5fcf76e901d93f978d Mon Sep 17 00:00:00 2001
+From: Zhu Yi <yi.zhu at intel.com>
+Date: Thu, 4 Mar 2010 18:01:40 +0000
+Subject: [PATCH 1/8] net: add limit for socket backlog
+
+[ Upstream commit 8eae939f1400326b06d0c9afe53d2a484a326871 ]
+
+We hit a system OOM while running UDP netperf tests over the loopback
+device: multiple senders streamed UDP packets to a single receiver via
+loopback on the local host. As expected, the receiver could not keep up
+with all the packets, but we were surprised to find that the packets
+were not discarded by the receiver's sk->sk_rcvbuf limit. Instead, they
+kept queuing onto sk->sk_backlog and eventually ate up all the memory.
+We believe this is a security hole that allows a non-privileged user to
+crash the system.
+
+The root cause is that when the receiver runs __release_sock() (i.e.
+after a userspace recv; in the kernel, udp_recvmsg ->
+skb_free_datagram_locked -> release_sock), it moves skbs from the
+backlog to sk_receive_queue with softirqs enabled. In the above case,
+multiple busy senders can keep this loop running almost endlessly, and
+the skbs in the backlog end up eating all the system memory.
+
+The issue is not limited to UDP; any protocol that uses the socket
+backlog is potentially affected. This patch adds a limit for the socket
+backlog so that its size cannot grow endlessly.
+
+Reported-by: Alex Shi <alex.shi at intel.com>
+Cc: David Miller <davem at davemloft.net>
+Cc: Arnaldo Carvalho de Melo <acme at ghostprotocols.net>
+Cc: Alexey Kuznetsov <kuznet at ms2.inr.ac.ru>
+Cc: "Pekka Savola (ipv6)" <pekkas at netcore.fi>
+Cc: Patrick McHardy <kaber at trash.net>
+Cc: Vlad Yasevich <vladislav.yasevich at hp.com>
+Cc: Sridhar Samudrala <sri at us.ibm.com>
+Cc: Jon Maloy <jon.maloy at ericsson.com>
+Cc: Allan Stephens <allan.stephens at windriver.com>
+Cc: Andrew Hendry <andrew.hendry at gmail.com>
+Signed-off-by: Zhu Yi <yi.zhu at intel.com>
+Signed-off-by: Eric Dumazet <eric.dumazet at gmail.com>
+Acked-by: Arnaldo Carvalho de Melo <acme at redhat.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+---
+ include/net/sock.h | 15 ++++++++++++++-
+ net/core/sock.c | 16 ++++++++++++++--
+ 2 files changed, 28 insertions(+), 3 deletions(-)
+
+diff --git a/include/net/sock.h b/include/net/sock.h
+index eecd369..d04a1ab 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -242,6 +242,8 @@ struct sock {
+ struct {
+ struct sk_buff *head;
+ struct sk_buff *tail;
++ int len;
++ int limit;
+ } sk_backlog;
+ wait_queue_head_t *sk_sleep;
+ struct dst_entry *sk_dst_cache;
+@@ -561,7 +563,7 @@ static inline int sk_stream_memory_free(struct sock *sk)
+ return sk->sk_wmem_queued < sk->sk_sndbuf;
+ }
+
+-/* The per-socket spinlock must be held here. */
++/* OOB backlog add */
+ static inline void sk_add_backlog(struct sock *sk, struct sk_buff *skb)
+ {
+ if (!sk->sk_backlog.tail) {
+@@ -573,6 +575,17 @@ static inline void sk_add_backlog(struct sock *sk, struct sk_buff *skb)
+ skb->next = NULL;
+ }
+
++/* The per-socket spinlock must be held here. */
++static inline int sk_add_backlog_limited(struct sock *sk, struct sk_buff *skb)
++{
++ if (sk->sk_backlog.len >= max(sk->sk_backlog.limit, sk->sk_rcvbuf << 1))
++ return -ENOBUFS;
++
++ sk_add_backlog(sk, skb);
++ sk->sk_backlog.len += skb->truesize;
++ return 0;
++}
++
+ static inline int sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
+ {
+ return sk->sk_backlog_rcv(sk, skb);
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 6605e75..5797dab 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -336,8 +336,12 @@ int sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested)
+ rc = sk_backlog_rcv(sk, skb);
+
+ mutex_release(&sk->sk_lock.dep_map, 1, _RET_IP_);
+- } else
+- sk_add_backlog(sk, skb);
++ } else if (sk_add_backlog_limited(sk, skb)) {
++ bh_unlock_sock(sk);
++ atomic_inc(&sk->sk_drops);
++ goto discard_and_relse;
++ }
++
+ bh_unlock_sock(sk);
+ out:
+ sock_put(sk);
+@@ -1114,6 +1118,7 @@ struct sock *sk_clone(const struct sock *sk, const gfp_t priority)
+ sock_lock_init(newsk);
+ bh_lock_sock(newsk);
+ newsk->sk_backlog.head = newsk->sk_backlog.tail = NULL;
++ newsk->sk_backlog.len = 0;
+
+ atomic_set(&newsk->sk_rmem_alloc, 0);
+ /*
+@@ -1517,6 +1522,12 @@ static void __release_sock(struct sock *sk)
+
+ bh_lock_sock(sk);
+ } while ((skb = sk->sk_backlog.head) != NULL);
++
++ /*
++ * Doing the zeroing here guarantees we cannot loop forever
++ * while a wild producer attempts to flood us.
++ */
++ sk->sk_backlog.len = 0;
+ }
+
+ /**
+@@ -1849,6 +1860,7 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ sk->sk_allocation = GFP_KERNEL;
+ sk->sk_rcvbuf = sysctl_rmem_default;
+ sk->sk_sndbuf = sysctl_wmem_default;
++ sk->sk_backlog.limit = sk->sk_rcvbuf << 1;
+ sk->sk_state = TCP_CLOSE;
+ sk_set_socket(sk, sock);
+
+--
+1.7.1
+
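The seven protocol patches in this upload all apply the same pattern on
the receive path: while the socket is owned by a user context, try the
limited backlog add and drop the packet when the limit is exceeded. A
minimal sketch of that pattern, with hypothetical handlers
my_proto_rcv()/my_proto_do_rcv() standing in for the real ones
(tcp_v4_rcv()/tcp_v4_do_rcv(), and so on):

	#include <linux/skbuff.h>
	#include <net/sock.h>

	static int my_proto_do_rcv(struct sock *sk, struct sk_buff *skb);

	/* Sketch only: mirrors the tcp/udp hunks, not a drop-in handler. */
	static int my_proto_rcv(struct sock *sk, struct sk_buff *skb)
	{
		int rc = 0;

		bh_lock_sock(sk);
		if (!sock_owned_by_user(sk)) {
			/* No user context holds the socket: process now. */
			rc = my_proto_do_rcv(sk, skb);
		} else if (sk_add_backlog_limited(sk, skb)) {
			/* The backlog already holds at least
			 * max(sk_backlog.limit, 2 * sk_rcvbuf) bytes of
			 * truesize: drop rather than queue, so a flood
			 * of senders cannot exhaust memory. */
			bh_unlock_sock(sk);
			atomic_inc(&sk->sk_drops);
			kfree_skb(skb);
			return -ENOBUFS;
		}
		bh_unlock_sock(sk);
		return rc;
	}

Note that sock_init_data() initialises the limit to twice sk_rcvbuf, and
the sctp patch below overrides it with sysctl_sctp_rmem[1].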
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/sctp-use-limited-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/sctp-use-limited-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,116 @@
+From b72211f8f76f2107af95ab8df5f81e8dffae9a09 Mon Sep 17 00:00:00 2001
+From: Zhu Yi <yi.zhu at intel.com>
+Date: Thu, 4 Mar 2010 18:01:44 +0000
+Subject: [PATCH 5/8] sctp: use limited socket backlog
+
+[ Upstream commit 50b1a782f845140f4138f14a1ce8a4a6dd0cc82f ]
+
+Make sctp adapt to the limited socket backlog change.
+
+Cc: Vlad Yasevich <vladislav.yasevich at hp.com>
+Cc: Sridhar Samudrala <sri at us.ibm.com>
+Signed-off-by: Zhu Yi <yi.zhu at intel.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+---
+ net/sctp/input.c | 42 +++++++++++++++++++++++++++---------------
+ net/sctp/socket.c | 3 +++
+ 2 files changed, 30 insertions(+), 15 deletions(-)
+
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index c0c973e..cbc0636 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -75,7 +75,7 @@ static struct sctp_association *__sctp_lookup_association(
+ const union sctp_addr *peer,
+ struct sctp_transport **pt);
+
+-static void sctp_add_backlog(struct sock *sk, struct sk_buff *skb);
++static int sctp_add_backlog(struct sock *sk, struct sk_buff *skb);
+
+
+ /* Calculate the SCTP checksum of an SCTP packet. */
+@@ -265,8 +265,13 @@ int sctp_rcv(struct sk_buff *skb)
+ }
+
+ if (sock_owned_by_user(sk)) {
++ if (sctp_add_backlog(sk, skb)) {
++ sctp_bh_unlock_sock(sk);
++ sctp_chunk_free(chunk);
++ skb = NULL; /* sctp_chunk_free already freed the skb */
++ goto discard_release;
++ }
+ SCTP_INC_STATS_BH(SCTP_MIB_IN_PKT_BACKLOG);
+- sctp_add_backlog(sk, skb);
+ } else {
+ SCTP_INC_STATS_BH(SCTP_MIB_IN_PKT_SOFTIRQ);
+ sctp_inq_push(&chunk->rcvr->inqueue, chunk);
+@@ -336,8 +341,10 @@ int sctp_backlog_rcv(struct sock *sk, struct sk_buff *skb)
+ sctp_bh_lock_sock(sk);
+
+ if (sock_owned_by_user(sk)) {
+- sk_add_backlog(sk, skb);
+- backloged = 1;
++ if (sk_add_backlog_limited(sk, skb))
++ sctp_chunk_free(chunk);
++ else
++ backloged = 1;
+ } else
+ sctp_inq_push(inqueue, chunk);
+
+@@ -362,22 +369,27 @@ done:
+ return 0;
+ }
+
+-static void sctp_add_backlog(struct sock *sk, struct sk_buff *skb)
++static int sctp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ {
+ struct sctp_chunk *chunk = SCTP_INPUT_CB(skb)->chunk;
+ struct sctp_ep_common *rcvr = chunk->rcvr;
++ int ret;
+
+- /* Hold the assoc/ep while hanging on the backlog queue.
+- * This way, we know structures we need will not disappear from us
+- */
+- if (SCTP_EP_TYPE_ASSOCIATION == rcvr->type)
+- sctp_association_hold(sctp_assoc(rcvr));
+- else if (SCTP_EP_TYPE_SOCKET == rcvr->type)
+- sctp_endpoint_hold(sctp_ep(rcvr));
+- else
+- BUG();
++ ret = sk_add_backlog_limited(sk, skb);
++ if (!ret) {
++ /* Hold the assoc/ep while hanging on the backlog queue.
++ * This way, we know structures we need will not disappear
++ * from us
++ */
++ if (SCTP_EP_TYPE_ASSOCIATION == rcvr->type)
++ sctp_association_hold(sctp_assoc(rcvr));
++ else if (SCTP_EP_TYPE_SOCKET == rcvr->type)
++ sctp_endpoint_hold(sctp_ep(rcvr));
++ else
++ BUG();
++ }
++ return ret;
+
+- sk_add_backlog(sk, skb);
+ }
+
+ /* Handle icmp frag needed error. */
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 3a95fcb..374dfe5 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -3719,6 +3719,9 @@ SCTP_STATIC int sctp_init_sock(struct sock *sk)
+ SCTP_DBG_OBJCNT_INC(sock);
+ percpu_counter_inc(&sctp_sockets_allocated);
+
++ /* Set socket backlog limit. */
++ sk->sk_backlog.limit = sysctl_sctp_rmem[1];
++
+ local_bh_disable();
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ local_bh_enable();
+--
+1.7.1
+
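One detail worth noting in the sctp patch above: sctp_add_backlog() now
takes the assoc/endpoint reference only after sk_add_backlog_limited()
succeeds. If the hold were taken first and the packet then dropped, the
reference would leak, because the matching put only happens when
sctp_backlog_rcv() later drains the queue. The invariant, condensed from
the hunk above:

	/* Hold a reference only for skbs that were actually queued;
	 * a dropped skb must take no hold at all, or the assoc/ep
	 * would never be released. */
	ret = sk_add_backlog_limited(sk, skb);
	if (!ret) {
		if (SCTP_EP_TYPE_ASSOCIATION == rcvr->type)
			sctp_association_hold(sctp_assoc(rcvr));
		else if (SCTP_EP_TYPE_SOCKET == rcvr->type)
			sctp_endpoint_hold(sctp_ep(rcvr));
		else
			BUG();
	}
	return ret;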
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/tcp-use-limited-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/tcp-use-limited-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,59 @@
+From 72358003bc516706eae62c61a44d8a7227edde2d Mon Sep 17 00:00:00 2001
+From: Zhu Yi <yi.zhu at intel.com>
+Date: Thu, 4 Mar 2010 18:01:41 +0000
+Subject: [PATCH 2/8] tcp: use limited socket backlog
+
+[ Upstream commit 6b03a53a5ab7ccf2d5d69f96cf1c739c4d2a8fb9 ]
+
+Make tcp adapt to the limited socket backlog change.
+
+Cc: "David S. Miller" <davem at davemloft.net>
+Cc: Alexey Kuznetsov <kuznet at ms2.inr.ac.ru>
+Cc: "Pekka Savola (ipv6)" <pekkas at netcore.fi>
+Cc: Patrick McHardy <kaber at trash.net>
+Signed-off-by: Zhu Yi <yi.zhu at intel.com>
+Acked-by: Eric Dumazet <eric.dumazet at gmail.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+---
+ net/ipv4/tcp_ipv4.c | 6 ++++--
+ net/ipv6/tcp_ipv6.c | 6 ++++--
+ 2 files changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 7cda24b..ea69003 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1634,8 +1634,10 @@ process:
+ if (!tcp_prequeue(sk, skb))
+ ret = tcp_v4_do_rcv(sk, skb);
+ }
+- } else
+- sk_add_backlog(sk, skb);
++ } else if (sk_add_backlog_limited(sk, skb)) {
++ bh_unlock_sock(sk);
++ goto discard_and_relse;
++ }
+ bh_unlock_sock(sk);
+
+ sock_put(sk);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 21d100b..a46a0f8 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1685,8 +1685,10 @@ process:
+ if (!tcp_prequeue(sk, skb))
+ ret = tcp_v6_do_rcv(sk, skb);
+ }
+- } else
+- sk_add_backlog(sk, skb);
++ } else if (sk_add_backlog_limited(sk, skb)) {
++ bh_unlock_sock(sk);
++ goto discard_and_relse;
++ }
+ bh_unlock_sock(sk);
+
+ sock_put(sk);
+--
+1.7.1
+
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/tipc-use-limited-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/tipc-use-limited-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,40 @@
+From f72fa7d52e70672b7084579cd975c90640bc372f Mon Sep 17 00:00:00 2001
+From: Zhu Yi <yi.zhu at intel.com>
+Date: Thu, 4 Mar 2010 18:01:45 +0000
+Subject: [PATCH 6/8] tipc: use limited socket backlog
+
+[ Upstream commit 53eecb1be5ae499d399d2923933937a9ea1a284f ]
+
+Make tipc adapt to the limited socket backlog change.
+
+Cc: Jon Maloy <jon.maloy at ericsson.com>
+Cc: Allan Stephens <allan.stephens at windriver.com>
+Signed-off-by: Zhu Yi <yi.zhu at intel.com>
+Acked-by: Eric Dumazet <eric.dumazet at gmail.com>
+Acked-by: Allan Stephens <allan.stephens at windriver.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+---
+ net/tipc/socket.c | 6 ++++--
+ 1 files changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index e6d9abf..d71804a 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1322,8 +1322,10 @@ static u32 dispatch(struct tipc_port *tport, struct sk_buff *buf)
+ if (!sock_owned_by_user(sk)) {
+ res = filter_rcv(sk, buf);
+ } else {
+- sk_add_backlog(sk, buf);
+- res = TIPC_OK;
++ if (sk_add_backlog_limited(sk, buf))
++ res = TIPC_ERR_OVERLOAD;
++ else
++ res = TIPC_OK;
+ }
+ bh_unlock_sock(sk);
+
+--
+1.7.1
+
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/udp-use-limited-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/udp-use-limited-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,87 @@
+From b5b16a9c0599b0ff34fa807342a8ca3200f342fc Mon Sep 17 00:00:00 2001
+From: Zhu Yi <yi.zhu at intel.com>
+Date: Thu, 9 Sep 2010 03:38:07 +0100
+Subject: [PATCH 3/8] udp: use limited socket backlog
+
+[ Upstream commit 55349790d7cbf0d381873a7ece1dcafcffd4aaa9 ]
+
+Make udp adapt to the limited socket backlog change.
+
+Cc: "David S. Miller" <davem at davemloft.net>
+Cc: Alexey Kuznetsov <kuznet at ms2.inr.ac.ru>
+Cc: "Pekka Savola (ipv6)" <pekkas at netcore.fi>
+Cc: Patrick McHardy <kaber at trash.net>
+Signed-off-by: Zhu Yi <yi.zhu at intel.com>
+Acked-by: Eric Dumazet <eric.dumazet at gmail.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+[bwh: Backport to 2.6.32]
+---
+ net/ipv4/udp.c | 6 ++++--
+ net/ipv6/udp.c | 20 ++++++++++++++------
+ 2 files changed, 18 insertions(+), 8 deletions(-)
+
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index c322f44..0ea57b1 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1174,8 +1174,10 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ bh_lock_sock(sk);
+ if (!sock_owned_by_user(sk))
+ rc = __udp_queue_rcv_skb(sk, skb);
+- else
+- sk_add_backlog(sk, skb);
++ else if (sk_add_backlog_limited(sk, skb)) {
++ bh_unlock_sock(sk);
++ goto drop;
++ }
+ bh_unlock_sock(sk);
+
+ return rc;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index cf538ed..154dd6b 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -470,16 +470,20 @@ static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
+ bh_lock_sock(sk2);
+ if (!sock_owned_by_user(sk2))
+ udpv6_queue_rcv_skb(sk2, buff);
+- else
+- sk_add_backlog(sk2, buff);
++ else if (sk_add_backlog_limited(sk2, buff)) {
++ atomic_inc(&sk2->sk_drops);
++ kfree_skb(buff);
++ }
+ bh_unlock_sock(sk2);
+ }
+ }
+ bh_lock_sock(sk);
+ if (!sock_owned_by_user(sk))
+ udpv6_queue_rcv_skb(sk, skb);
+- else
+- sk_add_backlog(sk, skb);
++ else if (sk_add_backlog_limited(sk, skb)) {
++ atomic_inc(&sk->sk_drops);
++ kfree_skb(skb);
++ }
+ bh_unlock_sock(sk);
+ out:
+ spin_unlock(&hslot->lock);
+@@ -598,8 +602,12 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ bh_lock_sock(sk);
+ if (!sock_owned_by_user(sk))
+ udpv6_queue_rcv_skb(sk, skb);
+- else
+- sk_add_backlog(sk, skb);
++ else if (sk_add_backlog_limited(sk, skb)) {
++ atomic_inc(&sk->sk_drops);
++ bh_unlock_sock(sk);
++ sock_put(sk);
++ goto discard;
++ }
+ bh_unlock_sock(sk);
+ sock_put(sk);
+ return 0;
+--
+1.7.1
+
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/x25-use-limited-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/x25-use-limited-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,34 @@
+From 3da74ff044980a2e88fae39c785218ccbc1e80e9 Mon Sep 17 00:00:00 2001
+From: Zhu Yi <yi.zhu at intel.com>
+Date: Thu, 4 Mar 2010 18:01:46 +0000
+Subject: [PATCH 7/8] x25: use limited socket backlog
+
+[ Upstream commit 2499849ee8f513e795b9f2c19a42d6356e4943a4 ]
+
+Make x25 adapt to the limited socket backlog change.
+
+Cc: Andrew Hendry <andrew.hendry at gmail.com>
+Signed-off-by: Zhu Yi <yi.zhu at intel.com>
+Acked-by: Eric Dumazet <eric.dumazet at gmail.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+---
+ net/x25/x25_dev.c | 2 +-
+ 1 files changed, 1 insertions(+), 1 deletions(-)
+
+diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c
+index 3e1efe5..a9da0dc 100644
+--- a/net/x25/x25_dev.c
++++ b/net/x25/x25_dev.c
+@@ -53,7 +53,7 @@ static int x25_receive_data(struct sk_buff *skb, struct x25_neigh *nb)
+ if (!sock_owned_by_user(sk)) {
+ queued = x25_process_rx_frame(sk, skb);
+ } else {
+- sk_add_backlog(sk, skb);
++ queued = !sk_add_backlog_limited(sk, skb);
+ }
+ bh_unlock_sock(sk);
+ sock_put(sk);
+--
+1.7.1
+
Added: dists/sid/linux-2.6/debian/patches/debian/net-Avoid-ABI-change-from-limit-for-socket-backlog.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/debian/net-Avoid-ABI-change-from-limit-for-socket-backlog.patch Thu Sep 9 03:19:15 2010 (r16253)
@@ -0,0 +1,96 @@
+From: Ben Hutchings <ben at decadent.org.uk>
+Date: Thu, 9 Sep 2010 03:46:50 +0100
+Subject: [PATCH 8/8] net: Avoid ABI change from limit for socket backlog
+
+Move the new fields to the end of struct sock and hide them from genksyms.
+---
+ include/net/sock.h | 10 ++++++----
+ net/core/sock.c | 6 +++---
+ net/sctp/socket.c | 2 +-
+ 3 files changed, 10 insertions(+), 8 deletions(-)
+
+diff --git a/include/net/sock.h b/include/net/sock.h
+index d04a1ab..e85971f 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -242,8 +242,6 @@ struct sock {
+ struct {
+ struct sk_buff *head;
+ struct sk_buff *tail;
+- int len;
+- int limit;
+ } sk_backlog;
+ wait_queue_head_t *sk_sleep;
+ struct dst_entry *sk_dst_cache;
+@@ -303,6 +301,10 @@ struct sock {
+ int (*sk_backlog_rcv)(struct sock *sk,
+ struct sk_buff *skb);
+ void (*sk_destruct)(struct sock *sk);
++#ifndef __GENKSYMS__
++ int sk_backlog_len;
++ int sk_backlog_limit;
++#endif
+ };
+
+ /*
+@@ -578,11 +580,11 @@ static inline void sk_add_backlog(struct sock *sk, struct sk_buff *skb)
+ /* The per-socket spinlock must be held here. */
+ static inline int sk_add_backlog_limited(struct sock *sk, struct sk_buff *skb)
+ {
+- if (sk->sk_backlog.len >= max(sk->sk_backlog.limit, sk->sk_rcvbuf << 1))
++ if (sk->sk_backlog_len >= max(sk->sk_backlog_limit, sk->sk_rcvbuf << 1))
+ return -ENOBUFS;
+
+ sk_add_backlog(sk, skb);
+- sk->sk_backlog.len += skb->truesize;
++ sk->sk_backlog_len += skb->truesize;
+ return 0;
+ }
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 5797dab..31e02d3 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1118,7 +1118,7 @@ struct sock *sk_clone(const struct sock *sk, const gfp_t priority)
+ sock_lock_init(newsk);
+ bh_lock_sock(newsk);
+ newsk->sk_backlog.head = newsk->sk_backlog.tail = NULL;
+- newsk->sk_backlog.len = 0;
++ newsk->sk_backlog_len = 0;
+
+ atomic_set(&newsk->sk_rmem_alloc, 0);
+ /*
+@@ -1527,7 +1527,7 @@ static void __release_sock(struct sock *sk)
+ * Doing the zeroing here guarantees we cannot loop forever
+ * while a wild producer attempts to flood us.
+ */
+- sk->sk_backlog.len = 0;
++ sk->sk_backlog_len = 0;
+ }
+
+ /**
+@@ -1860,7 +1860,7 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ sk->sk_allocation = GFP_KERNEL;
+ sk->sk_rcvbuf = sysctl_rmem_default;
+ sk->sk_sndbuf = sysctl_wmem_default;
+- sk->sk_backlog.limit = sk->sk_rcvbuf << 1;
++ sk->sk_backlog_limit = sk->sk_rcvbuf << 1;
+ sk->sk_state = TCP_CLOSE;
+ sk_set_socket(sk, sock);
+
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 374dfe5..84ab523 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -3720,7 +3720,7 @@ SCTP_STATIC int sctp_init_sock(struct sock *sk)
+ percpu_counter_inc(&sctp_sockets_allocated);
+
+ /* Set socket backlog limit. */
+- sk->sk_backlog.limit = sysctl_sctp_rmem[1];
++ sk->sk_backlog_limit = sysctl_sctp_rmem[1];
+
+ local_bh_disable();
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+--
+1.7.1
+
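For background on the ABI patch above: genksyms computes the symbol
version CRCs from the type declarations it parses, so members hidden
behind an #ifndef __GENKSYMS__ guard leave the existing CRCs, and hence
the modversions ABI check, unchanged. Moving the fields out of
sk_backlog also matters, because anything added in the middle of struct
sock would shift the offset of every member after it. A minimal
illustration of the pattern (not the full declaration):

	struct sock {
		/* ... existing members, offsets unchanged ... */
	#ifndef __GENKSYMS__
		/* New members: appended so older offsets stay stable,
		 * hidden so genksyms computes the old checksums. */
		int	sk_backlog_len;
		int	sk_backlog_limit;
	#endif
	};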
Modified: dists/sid/linux-2.6/debian/patches/series/22
==============================================================================
--- dists/sid/linux-2.6/debian/patches/series/22 Thu Sep 9 02:34:10 2010 (r16252)
+++ dists/sid/linux-2.6/debian/patches/series/22 Thu Sep 9 03:19:15 2010 (r16253)
@@ -105,3 +105,11 @@
- debian/revert-x86-paravirt-Add-a-global-synchronization-point.patch
- debian/revert-sched-cputime-Introduce-thread_group_times.patch
+ debian/sched-Avoid-ABI-change-from-thread_group_times.patch
++ bugfix/all/net-add-limit-for-socket-backlog.patch
++ bugfix/all/tcp-use-limited-socket-backlog.patch
++ bugfix/all/udp-use-limited-socket-backlog.patch
++ bugfix/all/llc-use-limited-socket-backlog.patch
++ bugfix/all/sctp-use-limited-socket-backlog.patch
++ bugfix/all/tipc-use-limited-socket-backlog.patch
++ bugfix/all/x25-use-limited-socket-backlog.patch
++ debian/net-Avoid-ABI-change-from-limit-for-socket-backlog.patch