[kernel] r16525 - in dists/sid/linux-2.6/debian: . patches/debian patches/features/all/openvz patches/series

Maximilian Attems maks at alioth.debian.org
Tue Nov 2 16:36:49 UTC 2010


Author: maks
Date: Tue Nov  2 16:36:34 2010
New Revision: 16525

Log:
update openvz to latest patchset

* applies to 2.6.32.25 (needed only minor changes, none breaking our ABI).
* merged the outstanding patches.
* brings PPP support as a feature enhancement, plus bugfixes for sysfs
  wireless card naming and GSO fixes (see the sketch below).

rename series 27-extra to 28-extra.
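
For reference, the new PPP support gates all per-namespace PPP state on
the container's VE_FEATURE_PPP flag. Below is a condensed sketch of the
pattern, assembled from the drivers/net/ppp_generic.c hunks further down
in this diff (not a verbatim excerpt; get_exec_env() and VE_FEATURE_PPP
come from the OpenVZ headers):

    #include <linux/ve_task.h>      /* get_exec_env() */
    #include <linux/vzcalluser.h>   /* VE_FEATURE_PPP */

    /* Per-net init: skip the allocation entirely for VEs without the
     * feature, so net_generic(net, ppp_net_id) stays NULL for them. */
    static __net_init int ppp_init_net(struct net *net)
    {
            if (!(get_exec_env()->features & VE_FEATURE_PPP))
                    return 0;
            /* ... kzalloc() and set up struct ppp_net as before ... */
            return 0;
    }

    /* Consumers then treat a NULL per-net struct as "feature off":
     * opening /dev/ppp is refused, and the exit_net handlers bail out
     * before touching the (absent) per-net state. */
    static int ppp_open(struct inode *inode, struct file *file)
    {
            if (!capable(CAP_NET_ADMIN))
                    return -EPERM;
            if (!net_generic(get_exec_env()->ve_netns, ppp_net_id))
                    return -EACCES; /* no VE_FEATURE_PPP */
            return 0;
    }

pppoe.c and pppol2tp.c follow the same discipline, moving the
proc_net_remove() call after the NULL check in their exit_net handlers.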

Added:
   dists/sid/linux-2.6/debian/patches/series/28-extra
      - copied, changed from r16524, dists/sid/linux-2.6/debian/patches/series/27-extra
Deleted:
   dists/sid/linux-2.6/debian/patches/debian/revert-tcp-Combat-per-cpu-skew-in-orphan-tests.patch
   dists/sid/linux-2.6/debian/patches/features/all/openvz/cfq-iosched-do-not-force-idling-for-sync-workload.patch
   dists/sid/linux-2.6/debian/patches/features/all/openvz/openvz-printk-handle-global-log-buffer-realloc.patch
   dists/sid/linux-2.6/debian/patches/series/27-extra
Modified:
   dists/sid/linux-2.6/debian/changelog
   dists/sid/linux-2.6/debian/patches/features/all/openvz/openvz.patch

Modified: dists/sid/linux-2.6/debian/changelog
==============================================================================
--- dists/sid/linux-2.6/debian/changelog	Tue Nov  2 10:45:13 2010	(r16524)
+++ dists/sid/linux-2.6/debian/changelog	Tue Nov  2 16:36:34 2010	(r16525)
@@ -6,6 +6,7 @@
   * Newer Standards-Version 3.9.1 without changes.
   * drm/ttm: Clear the ghost cpu_writers flag on ttm_buffer_object_transfer.
   * drm/nouveau: fix race condition when under memory pressure.
+  * [openvz] Update upstream patch to 2.6.32-dzhanibekov.
 
  -- maximilian attems <maks at debian.org>  Sat, 30 Oct 2010 14:14:37 +0200
 

Modified: dists/sid/linux-2.6/debian/patches/features/all/openvz/openvz.patch
==============================================================================
--- dists/sid/linux-2.6/debian/patches/features/all/openvz/openvz.patch	Tue Nov  2 10:45:13 2010	(r16524)
+++ dists/sid/linux-2.6/debian/patches/features/all/openvz/openvz.patch	Tue Nov  2 16:36:34 2010	(r16525)
@@ -1,3 +1,150 @@
+commit f3d52fc5575aa3bbd8bc270b448307736ca2ce33
+Author: Pavel Emelyanov <xemul at openvz.org>
+Date:   Mon Nov 1 14:36:24 2010 +0300
+
+    OpenVZ kernel 2.6.32-dzhanibekov released
+    
+    Named after Vladimir Aleksandrovich Dzhanibekov - a Soviet cosmonaut
+    
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 877ea29bb755fe88d58e02e61f11399eff22ca0d
+Author: Pavel Emelyanov <xemul at openvz.org>
+Date:   Mon Nov 1 14:24:29 2010 +0300
+
+    slab: Compilation fix for !SLABINFO case
+    
+    http://bugzilla.openvz.org/show_bug.cgi?id=1535
+    
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 8491289d8589f3f0e228b7c1859adfde57c572fe
+Author: Pavel Emelyanov <xemul at openvz.org>
+Date:   Mon Nov 1 14:17:34 2010 +0300
+
+    vzdq: Compilation fix for no-ugid-quota case
+    
+    http://bugzilla.openvz.org/show_bug.cgi?id=1503
+    
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit e924167368da165693d6401ebe8eed582857e098
+Author: Cyrill Gorcunov <gorcunov at gmail.com>
+Date:   Mon Nov 1 13:24:25 2010 +0300
+
+    Restore PPP virtualization
+    
+    ppp: Restore virtualization
+    
+    During the migration to 2.6.32 we lost some virtualization
+    features in the PPP facility. Get them back.
+    
+    Signed-off-by: Cyrill Gorcunov <gorcunov at gmail.com>
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 62c8dc47f9fe7ee634787740205908b104a2931e
+Author: Ben Hutchings <ben at decadent.org.uk>
+Date:   Sun Oct 17 02:24:28 2010 +0100
+
+    printk: Handle global log buffer reallocation
+    
+    Currently an increase in log_buf_len results in disaster, as
+    ve0.log_buf is left pointing to the old log buffer.
+    
+    Update ve0.log_buf when the global log buffer is reallocated.  Also
+    acquire logbuf_lock before reading ve_log_buf_len, to avoid a race
+    with reallocation.
+    
+    Reported-and-tested-by: Tim Small <tim at seoss.co.uk>
+    Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+    Signed-off-by: maximilian attems <max at stro.at>
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 6f60f358ace4892e38137151af452c3949154b20
+Author: Cyrill Gorcunov <gorcunov at openvz.org>
+Date:   Mon Oct 25 18:27:26 2010 +0400
+
+    net, sysfs: Allow to move netdevice sysfs links between namespaces
+    
+    The kernel is already tuned for this; there is no need to check for init_net.
+    
+    http://bugzilla.openvz.org/show_bug.cgi?id=1513
+    
+    Signed-off-by: Cyrill Gorcunov <gorcunov at openvz.org>
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 36257fa14f5557aada1deaaa5b936cdc690af0f7
+Author: Andrey Vagin <avagin at openvz.org>
+Date:   Fri Oct 8 12:18:40 2010 +0400
+
+    net: use correct skb for GSO case
+    
+    The GSO code uses the variable nskb.
+    http://bugzilla.openvz.org/show_bug.cgi?id=1634
+    
+    Signed-off-by: Andrey Vagin <avagin at openvz.org>
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 12b9937d06add8bcd3304d7c1e47707b4becaf8e
+Author: Andrey Vagin <avagin at openvz.org>
+Date:   Fri Oct 8 12:18:39 2010 +0400
+
+    net: release dst entry while cache-hot for GSO case too
+    
+    Non-GSO code drops the dst entry for performance reasons, but
+    the same is missing from the GSO code. Drop the dst while
+    cache-hot in the GSO case too.
+    
+    This patch has been backported from mainline because of
+    http://bugzilla.openvz.org/show_bug.cgi?id=1634
+    
+    Signed-off-by: Krishna Kumar <krkumar2 at in.ibm.com>
+    Acked-by: Eric Dumazet <eric.dumazet at gmail.com>
+    Signed-off-by: David S. Miller <davem at davemloft.net>
+    Signed-off-by: Andrey Vagin <avagin at openvz.org>
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 6909d4328e3766709e40f6cbcb7fc2a8f3718fc8
+Author: Konstantin Khlebnikov <khlebnikov at openvz.org>
+Date:   Wed Sep 1 16:27:50 2010 +0400
+
+    cfq-iosched: do not force idling for sync workload
+    
+    revert v2.6.32-108-gc04645e
+    blkio: Wait on sync-noidle queue even if rq_noidle = 1
+    by Vivek Goyal <vgoyal at redhat.com>
+    
+    and a piece of v2.6.32-rc5-486-g8e55063
+    cfq-iosched: fix corner cases in idling logic
+    by Corrado Zoccolo <czoccolo at gmail.com>
+    
+    Fix performance degradation for a massive write-fsync pattern:
+    # sysbench --test=fileio --file-num=1 --file-total-size=1G --file-fsync-all=on \
+    --file-test-mode=seqwr --max-time=10 --file-block-size=4096 --max-requests=0 run
+    
+    http://bugzilla.openvz.org/show_bug.cgi?id=1622
+    
+    Signed-off-by: Konstantin Khlebnikov <khlebnikov at openvz.org>
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
+commit 01cd32b7577f0e8ed795617e360bc02bf86617c7
+Merge: 763921f 8063013
+Author: Pavel Emelyanov <xemul at openvz.org>
+Date:   Mon Nov 1 13:07:38 2010 +0300
+
+    Merged linux-2.6.32.25
+    
+    Conflicts:
+    
+    	Makefile
+    	net/ipv4/tcp.c
+    	net/ipv4/tcp_timer.c
+    
+    I had to rewrite the per-bc orphan management due to
+    a89d316f (tcp: Combat per-cpu skew in orphan tests).
+    
+    Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
+
 commit 763921f076cdfd79359ffdc279edf8ee45d31691
 Author: Pavel Emelyanov <xemul at openvz.org>
 Date:   Tue Sep 21 18:24:37 2010 +0400
@@ -6130,9 +6277,6 @@
     Neither compiles, nor works.
     
     Signed-off-by: Pavel Emelyanov <xemul at openvz.org>
-
-[bwh: Adjust context for 2.6.32.25]
-
 diff --git a/COPYING.Parallels b/COPYING.Parallels
 new file mode 100644
 index 0000000..9856a2b
@@ -6490,14 +6634,14 @@
 +library.  If this is what you want to do, use the GNU Library General
 +Public License instead of this License.
 diff --git a/Makefile b/Makefile
-index 1786938..c11ec6e 100644
+index 2b6c7bd..f0c5190 100644
 --- a/Makefile
 +++ b/Makefile
 @@ -2,6 +2,7 @@ VERSION = 2
  PATCHLEVEL = 6
  SUBLEVEL = 32
  EXTRAVERSION =
-+VZVERSION = dyomin
++VZVERSION = dzhanibekov
  NAME = Man-Eating Seals of Antiquity
  
  # *DOCUMENTATION*
@@ -8122,7 +8266,7 @@
  		q->unplug_delay = 1;
  
 diff --git a/block/bsg.c b/block/bsg.c
-index 0676301..a9fd2d8 100644
+index 7154a7a..ae14805 100644
 --- a/block/bsg.c
 +++ b/block/bsg.c
 @@ -15,6 +15,7 @@
@@ -8143,7 +8287,7 @@
  		rq->timeout = q->sg_timeout;
  	if (!rq->timeout)
 diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
-index aa1e953..023f4e6 100644
+index aa1e953..b68b633 100644
 --- a/block/cfq-iosched.c
 +++ b/block/cfq-iosched.c
 @@ -9,9 +9,11 @@
@@ -10161,7 +10305,7 @@
  	}
  
  	/*
-@@ -2234,18 +3310,39 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
+@@ -2234,18 +3310,32 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
  			cfq_set_prio_slice(cfqd, cfqq);
  			cfq_clear_cfqq_slice_new(cfqq);
  		}
@@ -10196,20 +10340,13 @@
 +		else if (sync && cfqq_empty &&
 +			 !cfq_close_cooperator(cfqd, cfqq)) {
 +			cfqd->noidle_tree_requires_idle |= !rq_noidle(rq);
-+			/*
-+			 * Idling is enabled for SYNC_WORKLOAD.
-+			 * SYNC_NOIDLE_WORKLOAD idles at the end of the tree
-+			 * only if we processed at least one !rq_noidle request
-+			 */
-+			if (cfqd->serving_type == SYNC_WORKLOAD
-+			    || cfqd->noidle_tree_requires_idle
-+			    || cfqq->cfqg->nr_cfqq == 1)
++			if (cfqd->noidle_tree_requires_idle)
 +				cfq_arm_slice_timer(cfqd);
 +		}
  	}
  
  	if (!rq_in_driver(cfqd))
-@@ -2269,12 +3366,10 @@ static void cfq_prio_boost(struct cfq_queue *cfqq)
+@@ -2269,12 +3359,10 @@ static void cfq_prio_boost(struct cfq_queue *cfqq)
  			cfqq->ioprio = IOPRIO_NORM;
  	} else {
  		/*
@@ -10225,7 +10362,7 @@
  	}
  }
  
-@@ -2338,6 +3433,35 @@ static void cfq_put_request(struct request *rq)
+@@ -2338,6 +3426,35 @@ static void cfq_put_request(struct request *rq)
  	}
  }
  
@@ -10261,7 +10398,7 @@
  /*
   * Allocate cfq data structures associated with this request.
   */
-@@ -2360,10 +3484,30 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+@@ -2360,10 +3477,30 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
  	if (!cic)
  		goto queue_fail;
  
@@ -10292,7 +10429,7 @@
  	}
  
  	cfqq->allocated[rw]++;
-@@ -2438,6 +3582,11 @@ static void cfq_idle_slice_timer(unsigned long data)
+@@ -2438,6 +3575,11 @@ static void cfq_idle_slice_timer(unsigned long data)
  		 */
  		if (!RB_EMPTY_ROOT(&cfqq->sort_list))
  			goto out_kick;
@@ -10304,7 +10441,7 @@
  	}
  expire:
  	cfq_slice_expired(cfqd, timed_out);
-@@ -2468,6 +3617,11 @@ static void cfq_put_async_queues(struct cfq_data *cfqd)
+@@ -2468,6 +3610,11 @@ static void cfq_put_async_queues(struct cfq_data *cfqd)
  		cfq_put_queue(cfqd->async_idle_cfqq);
  }
  
@@ -10316,7 +10453,7 @@
  static void cfq_exit_queue(struct elevator_queue *e)
  {
  	struct cfq_data *cfqd = e->elevator_data;
-@@ -2489,25 +3643,49 @@ static void cfq_exit_queue(struct elevator_queue *e)
+@@ -2489,25 +3636,49 @@ static void cfq_exit_queue(struct elevator_queue *e)
  	}
  
  	cfq_put_async_queues(cfqd);
@@ -10369,7 +10506,7 @@
  	/*
  	 * Not strictly needed (since RB_ROOT just clears the node and we
  	 * zeroed cfqd on alloc), but better be safe in case someone decides
-@@ -2523,6 +3701,7 @@ static void *cfq_init_queue(struct request_queue *q)
+@@ -2523,6 +3694,7 @@ static void *cfq_init_queue(struct request_queue *q)
  	 */
  	cfq_init_cfqq(cfqd, &cfqd->oom_cfqq, 1, 0);
  	atomic_inc(&cfqd->oom_cfqq.ref);
@@ -10377,7 +10514,7 @@
  
  	INIT_LIST_HEAD(&cfqd->cic_list);
  
-@@ -2544,8 +3723,14 @@ static void *cfq_init_queue(struct request_queue *q)
+@@ -2544,8 +3716,14 @@ static void *cfq_init_queue(struct request_queue *q)
  	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
  	cfqd->cfq_slice_idle = cfq_slice_idle;
  	cfqd->cfq_latency = 1;
@@ -10394,7 +10531,7 @@
  	return cfqd;
  }
  
-@@ -2614,6 +3799,7 @@ SHOW_FUNCTION(cfq_slice_sync_show, cfqd->cfq_slice[1], 1);
+@@ -2614,6 +3792,7 @@ SHOW_FUNCTION(cfq_slice_sync_show, cfqd->cfq_slice[1], 1);
  SHOW_FUNCTION(cfq_slice_async_show, cfqd->cfq_slice[0], 1);
  SHOW_FUNCTION(cfq_slice_async_rq_show, cfqd->cfq_slice_async_rq, 0);
  SHOW_FUNCTION(cfq_low_latency_show, cfqd->cfq_latency, 0);
@@ -10402,7 +10539,7 @@
  #undef SHOW_FUNCTION
  
  #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
-@@ -2646,6 +3832,7 @@ STORE_FUNCTION(cfq_slice_async_store, &cfqd->cfq_slice[0], 1, UINT_MAX, 1);
+@@ -2646,6 +3825,7 @@ STORE_FUNCTION(cfq_slice_async_store, &cfqd->cfq_slice[0], 1, UINT_MAX, 1);
  STORE_FUNCTION(cfq_slice_async_rq_store, &cfqd->cfq_slice_async_rq, 1,
  		UINT_MAX, 0);
  STORE_FUNCTION(cfq_low_latency_store, &cfqd->cfq_latency, 0, 1, 0);
@@ -10410,7 +10547,7 @@
  #undef STORE_FUNCTION
  
  #define CFQ_ATTR(name) \
-@@ -2662,6 +3849,7 @@ static struct elv_fs_entry cfq_attrs[] = {
+@@ -2662,6 +3842,7 @@ static struct elv_fs_entry cfq_attrs[] = {
  	CFQ_ATTR(slice_async_rq),
  	CFQ_ATTR(slice_idle),
  	CFQ_ATTR(low_latency),
@@ -10418,7 +10555,7 @@
  	__ATTR_NULL
  };
  
-@@ -2691,6 +3879,17 @@ static struct elevator_type iosched_cfq = {
+@@ -2691,6 +3872,17 @@ static struct elevator_type iosched_cfq = {
  	.elevator_owner =	THIS_MODULE,
  };
  
@@ -10436,7 +10573,7 @@
  static int __init cfq_init(void)
  {
  	/*
-@@ -2705,6 +3904,7 @@ static int __init cfq_init(void)
+@@ -2705,6 +3897,7 @@ static int __init cfq_init(void)
  		return -ENOMEM;
  
  	elv_register(&iosched_cfq);
@@ -10444,7 +10581,7 @@
  
  	return 0;
  }
-@@ -2712,6 +3912,7 @@ static int __init cfq_init(void)
+@@ -2712,6 +3905,7 @@ static int __init cfq_init(void)
  static void __exit cfq_exit(void)
  {
  	DECLARE_COMPLETION_ONSTACK(all_gone);
@@ -12262,8 +12399,60 @@
 +MODULE_AUTHOR("SWsoft <info at sw-soft.com>");
 +MODULE_DESCRIPTION("Virtuozzo Virtual Network Device");
 +MODULE_LICENSE("GPL v2");
+diff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c
+index 965adb6..f8545d0 100644
+--- a/drivers/net/ppp_generic.c
++++ b/drivers/net/ppp_generic.c
+@@ -53,6 +53,9 @@
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
+ 
++#include <linux/ve_task.h>
++#include <linux/vzcalluser.h>
++
+ #define PPP_VERSION	"2.4.2"
+ 
+ /*
+@@ -368,6 +371,8 @@ static int ppp_open(struct inode *inode, struct file *file)
+ 	 */
+ 	if (!capable(CAP_NET_ADMIN))
+ 		return -EPERM;
++	if (!net_generic(get_exec_env()->ve_netns, ppp_net_id)) /* no VE_FEATURE_PPP */
++		return -EACCES;
+ 	return 0;
+ }
+ 
+@@ -867,6 +872,9 @@ static __net_init int ppp_init_net(struct net *net)
+ 	struct ppp_net *pn;
+ 	int err;
+ 
++	if (!(get_exec_env()->features & VE_FEATURE_PPP))
++		return 0;
++
+ 	pn = kzalloc(sizeof(*pn), GFP_KERNEL);
+ 	if (!pn)
+ 		return -ENOMEM;
+@@ -893,6 +901,9 @@ static __net_exit void ppp_exit_net(struct net *net)
+ 	struct ppp_net *pn;
+ 
+ 	pn = net_generic(net, ppp_net_id);
++	if (!pn) /* no VE_FEATURE_PPP */
++		return;
++
+ 	idr_destroy(&pn->units_idr);
+ 	/*
+ 	 * if someone has cached our net then
+@@ -1053,7 +1064,7 @@ static void ppp_setup(struct net_device *dev)
+ 	dev->tx_queue_len = 3;
+ 	dev->type = ARPHRD_PPP;
+ 	dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
+-	dev->features |= NETIF_F_NETNS_LOCAL;
++	dev->features |= NETIF_F_NETNS_LOCAL | NETIF_F_VIRTUAL;
+ 	dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
+ }
+ 
 diff --git a/drivers/net/pppoe.c b/drivers/net/pppoe.c
-index 2559991..19d17f0 100644
+index 2559991..326958b 100644
 --- a/drivers/net/pppoe.c
 +++ b/drivers/net/pppoe.c
 @@ -77,6 +77,7 @@
@@ -12284,8 +12473,31 @@
  	sk = sk_alloc(net, PF_PPPOX, GFP_KERNEL, &pppoe_sk_proto);
  	if (!sk)
  		return -ENOMEM;
+@@ -1144,6 +1148,9 @@ static __net_init int pppoe_init_net(struct net *net)
+ 	struct proc_dir_entry *pde;
+ 	int err;
+ 
++	if (!(get_exec_env()->features & VE_FEATURE_PPP))
++		return 0;
++
+ 	pn = kzalloc(sizeof(*pn), GFP_KERNEL);
+ 	if (!pn)
+ 		return -ENOMEM;
+@@ -1173,8 +1180,11 @@ static __net_exit void pppoe_exit_net(struct net *net)
+ {
+ 	struct pppoe_net *pn;
+ 
+-	proc_net_remove(net, "pppoe");
+ 	pn = net_generic(net, pppoe_net_id);
++	if (!pn) /* no VE_FEATURE_PPP */
++		return;
++
++	proc_net_remove(net, "pppoe");
+ 	/*
+ 	 * if someone has cached our net then
+ 	 * further net_generic call will return NULL
 diff --git a/drivers/net/pppol2tp.c b/drivers/net/pppol2tp.c
-index b724d7f..c457a95 100644
+index b724d7f..4384875 100644
 --- a/drivers/net/pppol2tp.c
 +++ b/drivers/net/pppol2tp.c
 @@ -97,6 +97,7 @@
@@ -12306,6 +12518,29 @@
  	sk = sk_alloc(net, PF_PPPOX, GFP_KERNEL, &pppol2tp_sk_proto);
  	if (!sk)
  		goto out;
+@@ -2606,6 +2610,9 @@ static __net_init int pppol2tp_init_net(struct net *net)
+ 	struct proc_dir_entry *pde;
+ 	int err;
+ 
++	if (!(get_exec_env()->features & VE_FEATURE_PPP))
++		return 0;
++
+ 	pn = kzalloc(sizeof(*pn), GFP_KERNEL);
+ 	if (!pn)
+ 		return -ENOMEM;
+@@ -2636,8 +2643,11 @@ static __net_exit void pppol2tp_exit_net(struct net *net)
+ {
+ 	struct pppoe_net *pn;
+ 
+-	proc_net_remove(net, "pppol2tp");
+ 	pn = net_generic(net, pppol2tp_net_id);
++	if (!pn) /* no VE_FEATURE_PPP */
++		return;
++
++	proc_net_remove(net, "pppol2tp");
+ 	/*
+ 	 * if someone has cached our net then
+ 	 * further net_generic call will return NULL
 diff --git a/drivers/net/tun.c b/drivers/net/tun.c
 index 0f77aca..a052759 100644
 --- a/drivers/net/tun.c
@@ -14454,7 +14689,7 @@
  obj-y				+= partitions/
  obj-$(CONFIG_SYSFS)		+= sysfs/
 diff --git a/fs/aio.c b/fs/aio.c
-index 02a2c93..1f18b09 100644
+index b84a769..11f1e99 100644
 --- a/fs/aio.c
 +++ b/fs/aio.c
 @@ -43,13 +43,16 @@
@@ -16115,7 +16350,7 @@
  /*
   * The following function implements the controller interface for
 diff --git a/fs/exec.c b/fs/exec.c
-index 56da15f..6ea8efa 100644
+index a0410eb..d2272be 100644
 --- a/fs/exec.c
 +++ b/fs/exec.c
 @@ -26,6 +26,7 @@
@@ -16170,7 +16405,7 @@
  	return err;
  }
  
-@@ -711,10 +724,11 @@ int kernel_read(struct file *file, loff_t offset,
+@@ -725,10 +738,11 @@ int kernel_read(struct file *file, loff_t offset,
  
  EXPORT_SYMBOL(kernel_read);
  
@@ -16184,7 +16419,7 @@
  
  	/* Notify parent that we're no longer interested in the old VM */
  	tsk = current;
-@@ -734,6 +748,10 @@ static int exec_mmap(struct mm_struct *mm)
+@@ -748,6 +762,10 @@ static int exec_mmap(struct mm_struct *mm)
  			return -EINTR;
  		}
  	}
@@ -16195,7 +16430,7 @@
  	task_lock(tsk);
  	active_mm = tsk->active_mm;
  	tsk->mm = mm;
-@@ -741,15 +759,25 @@ static int exec_mmap(struct mm_struct *mm)
+@@ -755,15 +773,25 @@ static int exec_mmap(struct mm_struct *mm)
  	activate_mm(active_mm, mm);
  	task_unlock(tsk);
  	arch_pick_mmap_layout(mm);
@@ -16223,7 +16458,7 @@
  }
  
  /*
-@@ -844,6 +872,10 @@ static int de_thread(struct task_struct *tsk)
+@@ -858,6 +886,10 @@ static int de_thread(struct task_struct *tsk)
  		transfer_pid(leader, tsk, PIDTYPE_PGID);
  		transfer_pid(leader, tsk, PIDTYPE_SID);
  		list_replace_rcu(&leader->tasks, &tsk->tasks);
@@ -16234,7 +16469,7 @@
  
  		tsk->group_leader = tsk;
  		leader->group_leader = tsk;
-@@ -962,12 +994,10 @@ int flush_old_exec(struct linux_binprm * bprm)
+@@ -976,12 +1008,10 @@ int flush_old_exec(struct linux_binprm * bprm)
  	/*
  	 * Release all of the old mmap stuff
  	 */
@@ -16248,7 +16483,7 @@
  	current->flags &= ~PF_RANDOMIZE;
  	flush_thread();
  	current->personality &= ~bprm->per_clear;
-@@ -1315,6 +1345,10 @@ int do_execve(char * filename,
+@@ -1329,6 +1359,10 @@ int do_execve(char * filename,
  	bool clear_in_exec;
  	int retval;
  
@@ -16259,7 +16494,7 @@
  	retval = unshare_files(&displaced);
  	if (retval)
  		goto out_ret;
-@@ -1566,7 +1600,7 @@ static int zap_process(struct task_struct *start)
+@@ -1580,7 +1614,7 @@ static int zap_process(struct task_struct *start)
  			signal_wake_up(t, 1);
  			nr++;
  		}
@@ -16268,7 +16503,7 @@
  
  	return nr;
  }
-@@ -1621,7 +1655,7 @@ static inline int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
+@@ -1635,7 +1669,7 @@ static inline int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
  	 *	next_thread().
  	 */
  	rcu_read_lock();
@@ -16277,7 +16512,7 @@
  		if (g == tsk->group_leader)
  			continue;
  		if (g->flags & PF_KTHREAD)
-@@ -1636,7 +1670,7 @@ static inline int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
+@@ -1650,7 +1684,7 @@ static inline int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
  				}
  				break;
  			}
@@ -16286,7 +16521,7 @@
  	}
  	rcu_read_unlock();
  done:
-@@ -1804,7 +1838,7 @@ void do_coredump(long signr, int exit_code, struct pt_regs *regs)
+@@ -1818,7 +1852,7 @@ void do_coredump(long signr, int exit_code, struct pt_regs *regs)
  	/*
  	 * If another thread got here first, or we are not dumpable, bail out.
  	 */
@@ -16379,10 +16614,10 @@
  
  static int __init init_ext3_fs(void)
 diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
-index 99596fc..51c1399 100644
+index 1b23f9d..2e1d3dd 100644
 --- a/fs/ext4/inode.c
 +++ b/fs/ext4/inode.c
-@@ -5840,9 +5840,14 @@ int ext4_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
+@@ -5846,9 +5846,14 @@ int ext4_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
  	int ret = -EINVAL;
  	void *fsdata;
  	struct file *file = vma->vm_file;
@@ -18709,7 +18944,7 @@
  
  extern void inotify_ignored_and_remove_idr(struct fsnotify_mark_entry *entry,
 diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c
-index e27960c..9b31a34 100644
+index 5d3d2a7..9698e45 100644
 --- a/fs/notify/inotify/inotify_fsnotify.c
 +++ b/fs/notify/inotify/inotify_fsnotify.c
 @@ -29,6 +29,7 @@
@@ -18720,7 +18955,7 @@
  
  #include "inotify.h"
  
-@@ -161,10 +162,25 @@ void inotify_free_event_priv(struct fsnotify_event_private_data *fsn_event_priv)
+@@ -164,10 +165,25 @@ void inotify_free_event_priv(struct fsnotify_event_private_data *fsn_event_priv)
  	kmem_cache_free(event_priv_cachep, event_priv);
  }
  
@@ -18747,7 +18982,7 @@
 +	.detach_mnt = inotify_detach_mnt,
  };
 diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
-index 22ef16a..d9909cd 100644
+index aef8f5d..b36e588 100644
 --- a/fs/notify/inotify/inotify_user.c
 +++ b/fs/notify/inotify/inotify_user.c
 @@ -40,6 +40,7 @@
@@ -18758,7 +18993,7 @@
  
  #include "inotify.h"
  
-@@ -340,7 +341,7 @@ static long inotify_ioctl(struct file *file, unsigned int cmd,
+@@ -343,7 +344,7 @@ static long inotify_ioctl(struct file *file, unsigned int cmd,
  	return ret;
  }
  
@@ -18767,7 +19002,7 @@
  	.poll		= inotify_poll,
  	.read		= inotify_read,
  	.fasync		= inotify_fasync,
-@@ -348,6 +349,7 @@ static const struct file_operations inotify_fops = {
+@@ -351,6 +352,7 @@ static const struct file_operations inotify_fops = {
  	.unlocked_ioctl	= inotify_ioctl,
  	.compat_ioctl	= inotify_ioctl,
  };
@@ -18775,7 +19010,7 @@
  
  
  /*
-@@ -461,6 +463,12 @@ static void inotify_free_mark(struct fsnotify_mark_entry *entry)
+@@ -464,6 +466,12 @@ static void inotify_free_mark(struct fsnotify_mark_entry *entry)
  {
  	struct inotify_inode_mark_entry *ientry = (struct inotify_inode_mark_entry *)entry;
  
@@ -18788,7 +19023,7 @@
  	kmem_cache_free(inotify_inode_mark_cachep, ientry);
  }
  
-@@ -527,16 +535,13 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
+@@ -530,16 +538,13 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
  	return ret;
  }
  
@@ -18808,7 +19043,7 @@
  	if (unlikely(!mask))
  		return -EINVAL;
  
-@@ -547,6 +552,8 @@ static int inotify_new_watch(struct fsnotify_group *group,
+@@ -550,6 +555,8 @@ static int inotify_new_watch(struct fsnotify_group *group,
  	fsnotify_init_mark(&tmp_ientry->fsn_entry, inotify_free_mark);
  	tmp_ientry->fsn_entry.mask = mask;
  	tmp_ientry->wd = -1;
@@ -18817,7 +19052,7 @@
  
  	ret = -ENOSPC;
  	if (atomic_read(&group->inotify_data.user->inotify_watches) >= inotify_max_user_watches)
-@@ -556,13 +563,16 @@ retry:
+@@ -559,13 +566,16 @@ retry:
  	if (unlikely(!idr_pre_get(&group->inotify_data.idr, GFP_KERNEL)))
  		goto out_err;
  
@@ -18836,7 +19071,7 @@
  	spin_unlock(&group->inotify_data.idr_lock);
  	if (ret) {
  		/* we didn't get on the idr, drop the idr reference */
-@@ -574,8 +584,15 @@ retry:
+@@ -577,8 +587,15 @@ retry:
  		goto out_err;
  	}
  
@@ -18853,7 +19088,7 @@
  	if (ret) {
  		/* we failed to get on the inode, get off the idr */
  		inotify_remove_from_idr(group, tmp_ientry);
-@@ -588,6 +605,12 @@ retry:
+@@ -591,6 +608,12 @@ retry:
  	/* increment the number of watches the user has */
  	atomic_inc(&group->inotify_data.user->inotify_watches);
  
@@ -18866,7 +19101,7 @@
  	/* return the watch descriptor for this new entry */
  	ret = tmp_ientry->wd;
  
-@@ -604,17 +627,24 @@ out_err:
+@@ -607,17 +630,24 @@ out_err:
  
  	return ret;
  }
@@ -18894,7 +19129,7 @@
  	/*
  	 * inotify_new_watch could race with another thread which did an
  	 * inotify_new_watch between the update_existing and the add watch
-@@ -714,12 +744,12 @@ SYSCALL_DEFINE0(inotify_init)
+@@ -717,12 +747,12 @@ SYSCALL_DEFINE0(inotify_init)
  {
  	return sys_inotify_init1(0);
  }
@@ -18908,7 +19143,7 @@
  	struct path path;
  	struct file *filp;
  	int ret, fput_needed;
-@@ -744,12 +774,10 @@ SYSCALL_DEFINE3(inotify_add_watch, int, fd, const char __user *, pathname,
+@@ -747,12 +777,10 @@ SYSCALL_DEFINE3(inotify_add_watch, int, fd, const char __user *, pathname,
  	if (ret)
  		goto fput_and_out;
  
@@ -22656,10 +22891,10 @@
 +#endif
 diff --git a/fs/quota/vzdquota/vzdq_ops.c b/fs/quota/vzdquota/vzdq_ops.c
 new file mode 100644
-index 0000000..904ff5e
+index 0000000..faa4d96
 --- /dev/null
 +++ b/fs/quota/vzdquota/vzdq_ops.c
-@@ -0,0 +1,644 @@
+@@ -0,0 +1,647 @@
 +/*
 + * Copyright (C) 2001, 2002, 2004, 2005  SWsoft
 + * All rights reserved.
@@ -23270,6 +23505,9 @@
 +	return QUOTA_OK;
 +}
 +
++static void vzquota_swap_inode(struct inode *inode, struct inode *tmpl)
++{
++}
 +#endif
 +
 +/*
@@ -29673,10 +29911,10 @@
 +#endif
 diff --git a/include/bc/sock_orphan.h b/include/bc/sock_orphan.h
 new file mode 100644
-index 0000000..b19a316
+index 0000000..c5b2412
 --- /dev/null
 +++ b/include/bc/sock_orphan.h
-@@ -0,0 +1,104 @@
+@@ -0,0 +1,102 @@
 +/*
 + *  include/bc/sock_orphan.h
 + *
@@ -29719,15 +29957,13 @@
 +}
 +
 +extern int __ub_too_many_orphans(struct sock *sk, int count);
-+static inline int ub_too_many_orphans(struct sock *sk, int count)
++static inline int ub_too_many_orphans(struct sock *sk, int shift)
 +{
 +#ifdef CONFIG_BEANCOUNTERS
-+	if (__ub_too_many_orphans(sk, count))
++	if (__ub_too_many_orphans(sk, shift))
 +		return 1;
 +#endif
-+	return (ub_get_orphan_count(sk) > sysctl_tcp_max_orphans ||
-+		(sk->sk_wmem_queued > SOCK_MIN_SNDBUF &&
-+		 atomic_read(&tcp_memory_allocated) > sysctl_tcp_mem[2]));
++	return tcp_too_many_orphans(sk, shift);
 +}
 +
 +#include <bc/kmem.h>
@@ -33459,10 +33695,10 @@
 +
  #endif
 diff --git a/include/linux/mm.h b/include/linux/mm.h
-index 24c3956..7bb1cf3 100644
+index 11e5be6..5a3b9cf 100644
 --- a/include/linux/mm.h
 +++ b/include/linux/mm.h
-@@ -712,6 +712,7 @@ extern void pagefault_out_of_memory(void);
+@@ -716,6 +716,7 @@ extern void pagefault_out_of_memory(void);
  extern void show_free_areas(void);
  
  int shmem_lock(struct file *file, int lock, struct user_struct *user);
@@ -33470,7 +33706,7 @@
  struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags);
  int shmem_zero_setup(struct vm_area_struct *);
  
-@@ -776,7 +777,9 @@ int walk_page_range(unsigned long addr, unsigned long end,
+@@ -780,7 +781,9 @@ int walk_page_range(unsigned long addr, unsigned long end,
  void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
  		unsigned long end, unsigned long floor, unsigned long ceiling);
  int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
@@ -33481,7 +33717,7 @@
  void unmap_mapping_range(struct address_space *mapping,
  		loff_t const holebegin, loff_t const holelen, int even_cows);
  int follow_pfn(struct vm_area_struct *vma, unsigned long address,
-@@ -832,7 +835,7 @@ int __set_page_dirty_nobuffers(struct page *page);
+@@ -836,7 +839,7 @@ int __set_page_dirty_nobuffers(struct page *page);
  int __set_page_dirty_no_writeback(struct page *page);
  int redirty_page_for_writepage(struct writeback_control *wbc,
  				struct page *page);
@@ -33490,7 +33726,7 @@
  int set_page_dirty(struct page *page);
  int set_page_dirty_lock(struct page *page);
  int clear_page_dirty_for_io(struct page *page);
-@@ -1294,7 +1297,12 @@ unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
+@@ -1306,7 +1309,12 @@ unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
  #ifndef CONFIG_MMU
  #define randomize_va_space 0
  #else
@@ -35053,7 +35289,7 @@
  {
  	skb->queue_mapping = queue_mapping;
 diff --git a/include/linux/slab.h b/include/linux/slab.h
-index 2da8372..c6e898d 100644
+index 2da8372..426eec4 100644
 --- a/include/linux/slab.h
 +++ b/include/linux/slab.h
 @@ -88,6 +88,26 @@
@@ -35083,11 +35319,15 @@
   * struct kmem_cache related prototypes
   */
  void __init kmem_cache_init(void);
-@@ -102,7 +122,20 @@ void kmem_cache_free(struct kmem_cache *, void *);
+@@ -102,7 +122,24 @@ void kmem_cache_free(struct kmem_cache *, void *);
  unsigned int kmem_cache_size(struct kmem_cache *);
  const char *kmem_cache_name(struct kmem_cache *);
  int kmem_ptr_validate(struct kmem_cache *cachep, const void *ptr);
++#ifdef CONFIG_SLABINFO
 +extern void show_slab_info(void);
++#else
++#define show_slab_info()	do { } while (0)
++#endif
 +int kmem_cache_objuse(struct kmem_cache *cachep);
 +int kmem_obj_objuse(void *obj);
 +int kmem_dname_objuse(void *obj);
@@ -35241,7 +35481,7 @@
  		if (!s)
  			return ZERO_SIZE_PTR;
 diff --git a/include/linux/socket.h b/include/linux/socket.h
-index 3273a0c..87cf3d1 100644
+index 9464cfb..b62937a 100644
 --- a/include/linux/socket.h
 +++ b/include/linux/socket.h
 @@ -296,6 +296,16 @@ struct ucred {
@@ -36697,7 +36937,7 @@
  /*
   *	Lowlevel-APIs (not for driver use!)
 diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
-index 2d0f222..977a906 100644
+index 13070d6..6cd3612 100644
 --- a/include/linux/vmstat.h
 +++ b/include/linux/vmstat.h
 @@ -105,6 +105,7 @@ static inline void vm_events_fold_cpu(int cpu)
@@ -38357,7 +38597,7 @@
  {
  	if (unlikely(skb->sk)) {
 diff --git a/include/net/tcp.h b/include/net/tcp.h
-index 842ac4d..4e8841c 100644
+index 6cfe18b..6fa5f0d 100644
 --- a/include/net/tcp.h
 +++ b/include/net/tcp.h
 @@ -44,6 +44,13 @@
@@ -38395,7 +38635,7 @@
  
  extern atomic_t tcp_memory_allocated;
  extern struct percpu_counter tcp_sockets_allocated;
-@@ -592,7 +605,11 @@ extern u32	__tcp_select_window(struct sock *sk);
+@@ -616,7 +629,11 @@ extern u32	__tcp_select_window(struct sock *sk);
   * to use only the low 32-bits of jiffies and hide the ugly
   * casts with the following macro.
   */
@@ -42221,10 +42461,10 @@
 +}
 diff --git a/kernel/bc/net.c b/kernel/bc/net.c
 new file mode 100644
-index 0000000..2e450f7
+index 0000000..2866ebb
 --- /dev/null
 +++ b/kernel/bc/net.c
-@@ -0,0 +1,1153 @@
+@@ -0,0 +1,1165 @@
 +/*
 + *  linux/kernel/bc/net.c
 + *
@@ -42294,6 +42534,7 @@
 +#include <bc/beancounter.h>
 +#include <bc/net.h>
 +#include <bc/debug.h>
++#include <bc/sock_orphan.h>
 +
 +/* by some reason it is not used currently */
 +#define UB_SOCK_MAINTAIN_WMEMPRESSURE	0
@@ -42328,13 +42569,24 @@
 +static int ub_sock_makewreserv_locked(struct sock *sk,
 +		int bufid, unsigned long size);
 +
-+int __ub_too_many_orphans(struct sock *sk, int count)
++int __ub_too_many_orphans(struct sock *sk, int shift)
 +{
 +	struct user_beancounter *ub;
++	struct percpu_counter *cnt;
 +
 +	if (sock_has_ubc(sk)) {
++		int orphans, limit;
++
 +		ub = top_beancounter(sock_bc(sk)->ub);
-+		if (count >= ub->ub_parms[UB_NUMTCPSOCK].barrier >> 2)
++		limit = ((int)ub->ub_parms[UB_NUMTCPSOCK].barrier) >> 2;
++		cnt = __ub_get_orphan_count_ptr(sk);
++
++		orphans = percpu_counter_read_positive(cnt);
++		if ((orphans << shift) >= limit)
++			return 1;
++
++		orphans = percpu_counter_sum_positive(cnt);
++		if ((orphans << shift) >= limit)
 +			return 1;
 +	}
 +	return 0;
@@ -68157,7 +68409,7 @@
  		    (!cputime_eq(p->utime, cputime_zero) ||
  		     !cputime_eq(p->stime, cputime_zero)))
 diff --git a/kernel/exit.c b/kernel/exit.c
-index 4a0e062..86da6c1 100644
+index 45102e9..36fa8da 100644
 --- a/kernel/exit.c
 +++ b/kernel/exit.c
 @@ -22,6 +22,9 @@
@@ -68329,7 +68581,7 @@
  #ifdef CONFIG_NUMA
  	mpol_put(tsk->mempolicy);
  	tsk->mempolicy = NULL;
-@@ -1630,7 +1666,7 @@ repeat:
+@@ -1629,7 +1665,7 @@ repeat:
  
  		if (wo->wo_flags & __WNOTHREAD)
  			break;
@@ -68338,7 +68590,7 @@
  	read_unlock(&tasklist_lock);
  
  notask:
-@@ -1757,6 +1793,7 @@ SYSCALL_DEFINE4(wait4, pid_t, upid, int __user *, stat_addr,
+@@ -1756,6 +1792,7 @@ SYSCALL_DEFINE4(wait4, pid_t, upid, int __user *, stat_addr,
  	asmlinkage_protect(4, ret, upid, stat_addr, options, ru);
  	return ret;
  }
@@ -69490,10 +69742,10 @@
  
  /*
 diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
-index 931a4d9..b34a0b9 100644
+index a6e9d00..e908845 100644
 --- a/kernel/hrtimer.c
 +++ b/kernel/hrtimer.c
-@@ -1545,6 +1545,7 @@ out:
+@@ -1554,6 +1554,7 @@ out:
  	destroy_hrtimer_on_stack(&t.timer);
  	return ret;
  }
@@ -70410,7 +70662,7 @@
  }
  
 diff --git a/kernel/printk.c b/kernel/printk.c
-index f38b07f..1041e53 100644
+index f38b07f..75f2691 100644
 --- a/kernel/printk.c
 +++ b/kernel/printk.c
 @@ -31,7 +31,9 @@
@@ -70460,7 +70712,17 @@
  static int __init log_buf_len_setup(char *str)
  {
  	unsigned size = memparse(str, &str);
-@@ -278,6 +294,9 @@ int do_syslog(int type, char __user *buf, int len)
+@@ -182,6 +198,9 @@ static int __init log_buf_len_setup(char *str)
+ 		spin_lock_irqsave(&logbuf_lock, flags);
+ 		log_buf_len = size;
+ 		log_buf = new_log_buf;
++#ifdef CONFIG_VE
++		ve0.log_buf = log_buf;
++#endif
+ 
+ 		offset = start = min(con_start, log_start);
+ 		dest_idx = 0;
+@@ -278,6 +297,9 @@ int do_syslog(int type, char __user *buf, int len)
  	char c;
  	int error = 0;
  
@@ -70470,7 +70732,7 @@
  	error = security_syslog(type);
  	if (error)
  		return error;
-@@ -298,15 +317,15 @@ int do_syslog(int type, char __user *buf, int len)
+@@ -298,15 +320,15 @@ int do_syslog(int type, char __user *buf, int len)
  			error = -EFAULT;
  			goto out;
  		}
@@ -70491,7 +70753,7 @@
  			spin_unlock_irq(&logbuf_lock);
  			error = __put_user(c,buf);
  			buf++;
-@@ -332,15 +351,17 @@ int do_syslog(int type, char __user *buf, int len)
+@@ -332,15 +354,17 @@ int do_syslog(int type, char __user *buf, int len)
  			error = -EFAULT;
  			goto out;
  		}
@@ -70500,11 +70762,11 @@
  		count = len;
 -		if (count > log_buf_len)
 -			count = log_buf_len;
-+		if (count > ve_log_buf_len)
-+			count = ve_log_buf_len;
  		spin_lock_irq(&logbuf_lock);
 -		if (count > logged_chars)
 -			count = logged_chars;
++		if (count > ve_log_buf_len)
++			count = ve_log_buf_len;
 +		if (count > ve_logged_chars)
 +			count = ve_logged_chars;
  		if (do_clear)
@@ -70515,7 +70777,7 @@
  		/*
  		 * __put_user() could sleep, and while we sleep
  		 * printk() could overwrite the messages
-@@ -349,9 +370,9 @@ int do_syslog(int type, char __user *buf, int len)
+@@ -349,9 +373,9 @@ int do_syslog(int type, char __user *buf, int len)
  		 */
  		for (i = 0; i < count && !error; i++) {
  			j = limit-1-i;
@@ -70527,7 +70789,7 @@
  			spin_unlock_irq(&logbuf_lock);
  			error = __put_user(c,&buf[count-1-i]);
  			cond_resched();
-@@ -375,7 +396,7 @@ int do_syslog(int type, char __user *buf, int len)
+@@ -375,7 +399,7 @@ int do_syslog(int type, char __user *buf, int len)
  		}
  		break;
  	case 5:		/* Clear ring buffer */
@@ -70536,7 +70798,7 @@
  		break;
  	case 6:		/* Disable logging to console */
  		if (saved_console_loglevel == -1)
-@@ -392,18 +413,21 @@ int do_syslog(int type, char __user *buf, int len)
+@@ -392,18 +416,21 @@ int do_syslog(int type, char __user *buf, int len)
  		error = -EINVAL;
  		if (len < 1 || len > 8)
  			goto out;
@@ -70561,7 +70823,7 @@
  		break;
  	default:
  		error = -EINVAL;
-@@ -514,14 +538,14 @@ static void call_console_drivers(unsigned start, unsigned end)
+@@ -514,14 +541,14 @@ static void call_console_drivers(unsigned start, unsigned end)
  
  static void emit_log_char(char c)
  {
@@ -70584,7 +70846,7 @@
  }
  
  /*
-@@ -586,6 +610,30 @@ static int have_callable_console(void)
+@@ -586,6 +613,30 @@ static int have_callable_console(void)
   * See the vsnprintf() documentation for format string extensions over C99.
   */
  
@@ -70615,7 +70877,7 @@
  asmlinkage int printk(const char *fmt, ...)
  {
  	va_list args;
-@@ -667,13 +715,14 @@ static inline void printk_delay(void)
+@@ -667,13 +718,14 @@ static inline void printk_delay(void)
  	}
  }
  
@@ -70631,7 +70893,7 @@
  
  	boot_delay_msec();
  	printk_delay();
-@@ -705,6 +754,13 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+@@ -705,6 +757,13 @@ asmlinkage int vprintk(const char *fmt, va_list args)
  	spin_lock(&logbuf_lock);
  	printk_cpu = this_cpu;
  
@@ -70645,7 +70907,7 @@
  	if (recursion_bug) {
  		recursion_bug = 0;
  		strcpy(printk_buf, recursion_bug_msg);
-@@ -788,19 +844,67 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+@@ -788,19 +847,67 @@ asmlinkage int vprintk(const char *fmt, va_list args)
  	 * will release 'logbuf_lock' regardless of whether it
  	 * actually gets the semaphore or not.
  	 */
@@ -70714,7 +70976,7 @@
  #else
  
  static void call_console_drivers(unsigned start, unsigned end)
-@@ -1058,6 +1162,7 @@ void release_console_sem(void)
+@@ -1058,6 +1165,7 @@ void release_console_sem(void)
  		_con_start = con_start;
  		_log_end = log_end;
  		con_start = log_end;		/* Flush */
@@ -70722,7 +70984,7 @@
  		spin_unlock(&logbuf_lock);
  		stop_critical_timings();	/* don't trace print latency */
  		call_console_drivers(_con_start, _log_end);
-@@ -1066,6 +1171,7 @@ void release_console_sem(void)
+@@ -1066,6 +1174,7 @@ void release_console_sem(void)
  	}
  	console_locked = 0;
  	up(&console_sem);
@@ -70730,7 +70992,7 @@
  	spin_unlock_irqrestore(&logbuf_lock, flags);
  	if (wake_klogd)
  		wake_up_klogd();
-@@ -1382,6 +1488,36 @@ int printk_ratelimit(void)
+@@ -1382,6 +1491,36 @@ int printk_ratelimit(void)
  }
  EXPORT_SYMBOL(printk_ratelimit);
  
@@ -70767,7 +71029,7 @@
  /**
   * printk_timed_ratelimit - caller-controlled printk ratelimiting
   * @caller_jiffies: pointer to caller's state
-@@ -1405,3 +1541,65 @@ bool printk_timed_ratelimit(unsigned long *caller_jiffies,
+@@ -1405,3 +1544,65 @@ bool printk_timed_ratelimit(unsigned long *caller_jiffies,
  }
  EXPORT_SYMBOL(printk_timed_ratelimit);
  #endif
@@ -70896,7 +71158,7 @@
  	child = find_task_by_vpid(pid);
  	if (child)
 diff --git a/kernel/sched.c b/kernel/sched.c
-index 152214d..c9f9161 100644
+index a675fd6..d186389 100644
 --- a/kernel/sched.c
 +++ b/kernel/sched.c
 @@ -71,6 +71,8 @@
@@ -71608,7 +71870,7 @@
  
  	read_unlock(&tasklist_lock);
  }
-@@ -9599,6 +10013,7 @@ void __init sched_init(void)
+@@ -9594,6 +10008,7 @@ void __init sched_init(void)
  	update_shares_data = __alloc_percpu(nr_cpu_ids * sizeof(unsigned long),
  					    __alignof__(unsigned long));
  #endif
@@ -71616,7 +71878,7 @@
  	for_each_possible_cpu(i) {
  		struct rq *rq;
  
-@@ -9612,7 +10027,7 @@ void __init sched_init(void)
+@@ -9607,7 +10022,7 @@ void __init sched_init(void)
  #ifdef CONFIG_FAIR_GROUP_SCHED
  		init_task_group.shares = init_task_group_load;
  		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
@@ -71625,7 +71887,7 @@
  		/*
  		 * How much cpu bandwidth does init_task_group get?
  		 *
-@@ -9658,7 +10073,7 @@ void __init sched_init(void)
+@@ -9653,7 +10068,7 @@ void __init sched_init(void)
  		rq->rt.rt_runtime = def_rt_bandwidth.rt_runtime;
  #ifdef CONFIG_RT_GROUP_SCHED
  		INIT_LIST_HEAD(&rq->leaf_rt_rq_list);
@@ -71634,7 +71896,7 @@
  		init_tg_rt_entry(&init_task_group, &rq->rt, NULL, i, 1, NULL);
  #elif defined CONFIG_USER_SCHED
  		init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, 0, NULL);
-@@ -9724,6 +10139,7 @@ void __init sched_init(void)
+@@ -9719,6 +10134,7 @@ void __init sched_init(void)
  	 * During early bootup we pretend to be a normal task:
  	 */
  	current->sched_class = &fair_sched_class;
@@ -71642,7 +71904,7 @@
  
  	/* Allocate the nohz_cpu_mask if CONFIG_CPUMASK_OFFSTACK */
  	zalloc_cpumask_var(&nohz_cpu_mask, GFP_NOWAIT);
-@@ -9802,7 +10218,7 @@ void normalize_rt_tasks(void)
+@@ -9797,7 +10213,7 @@ void normalize_rt_tasks(void)
  	struct rq *rq;
  
  	read_lock_irqsave(&tasklist_lock, flags);
@@ -71651,7 +71913,7 @@
  		/*
  		 * Only normalize user tasks:
  		 */
-@@ -9833,7 +10249,7 @@ void normalize_rt_tasks(void)
+@@ -9828,7 +10244,7 @@ void normalize_rt_tasks(void)
  
  		__task_rq_unlock(rq);
  		spin_unlock(&p->pi_lock);
@@ -71660,7 +71922,7 @@
  
  	read_unlock_irqrestore(&tasklist_lock, flags);
  }
-@@ -10279,10 +10695,10 @@ static inline int tg_has_rt_tasks(struct task_group *tg)
+@@ -10274,10 +10690,10 @@ static inline int tg_has_rt_tasks(struct task_group *tg)
  {
  	struct task_struct *g, *p;
  
@@ -71956,7 +72218,7 @@
  	if (!in_interrupt() && local_softirq_pending())
  		invoke_softirq();
 diff --git a/kernel/sys.c b/kernel/sys.c
-index 26e4b8a..d182032 100644
+index 440ca69..4e24efc 100644
 --- a/kernel/sys.c
 +++ b/kernel/sys.c
 @@ -10,6 +10,8 @@
@@ -72172,7 +72434,7 @@
  }
  
  /*
-@@ -1132,7 +1277,7 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
+@@ -1134,7 +1279,7 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
  	int errno;
  	char tmp[__NEW_UTS_LEN];
  
@@ -72181,7 +72443,7 @@
  		return -EPERM;
  	if (len < 0 || len > __NEW_UTS_LEN)
  		return -EINVAL;
-@@ -1181,7 +1326,7 @@ SYSCALL_DEFINE2(setdomainname, char __user *, name, int, len)
+@@ -1183,7 +1328,7 @@ SYSCALL_DEFINE2(setdomainname, char __user *, name, int, len)
  	int errno;
  	char tmp[__NEW_UTS_LEN];
  
@@ -76529,7 +76791,7 @@
  
  		if (!task_early_kill(tsk))
 diff --git a/mm/memory.c b/mm/memory.c
-index 194dc17..8bb23cc 100644
+index 53c1da0..2d1b2df 100644
 --- a/mm/memory.c
 +++ b/mm/memory.c
 @@ -42,6 +42,9 @@
@@ -76923,7 +77185,7 @@
  	page_cache_release(page);
  	return ret;
  }
-@@ -2668,6 +2754,7 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -2675,6 +2761,7 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
  	struct page *page;
  	spinlock_t *ptl;
  	pte_t entry;
@@ -76931,7 +77193,7 @@
  
  	pte_unmap(page_table);
  
-@@ -2686,6 +2773,9 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -2693,6 +2780,9 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
  	}
  
  	/* Allocate our own private page. */
@@ -76941,7 +77203,7 @@
  	if (unlikely(anon_vma_prepare(vma)))
  		goto oom;
  	page = alloc_zeroed_user_highpage_movable(vma, address);
-@@ -2706,12 +2796,15 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -2713,12 +2803,15 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
  
  	inc_mm_counter(mm, anon_rss);
  	page_add_new_anon_rmap(page, vma, address);
@@ -76957,7 +77219,7 @@
  	pte_unmap_unlock(page_table, ptl);
  	return 0;
  release:
-@@ -2721,6 +2814,8 @@ release:
+@@ -2728,6 +2821,8 @@ release:
  oom_free_page:
  	page_cache_release(page);
  oom:
@@ -76966,7 +77228,7 @@
  	return VM_FAULT_OOM;
  }
  
-@@ -2748,6 +2843,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -2755,6 +2850,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  	int anon = 0;
  	int charged = 0;
  	struct page *dirty_page = NULL;
@@ -76974,7 +77236,7 @@
  	struct vm_fault vmf;
  	int ret;
  	int page_mkwrite = 0;
-@@ -2757,9 +2853,13 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -2764,9 +2860,13 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  	vmf.flags = flags;
  	vmf.page = NULL;
  
@@ -76989,7 +77251,7 @@
  
  	if (unlikely(PageHWPoison(vmf.page))) {
  		if (ret & VM_FAULT_LOCKED)
-@@ -2853,6 +2953,8 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -2860,6 +2960,8 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  	 */
  	/* Only go through if we didn't race with anybody else... */
  	if (likely(pte_same(*page_table, orig_pte))) {
@@ -76998,7 +77260,7 @@
  		flush_icache_page(vma, page);
  		entry = mk_pte(page, vma->vm_page_prot);
  		if (flags & FAULT_FLAG_WRITE)
-@@ -2869,6 +2971,25 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -2876,6 +2978,25 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  			}
  		}
  		set_pte_at(mm, address, page_table, entry);
@@ -77024,7 +77286,7 @@
  
  		/* no need to invalidate: a not-present page won't be cached */
  		update_mmu_cache(vma, address, entry);
-@@ -2908,6 +3029,9 @@ out:
+@@ -2915,6 +3036,9 @@ out:
  			page_cache_release(vmf.page);
  	}
  
@@ -77034,7 +77296,7 @@
  	return ret;
  
  unwritable_page:
-@@ -3035,6 +3159,27 @@ int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+@@ -3042,6 +3166,27 @@ int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  	pmd_t *pmd;
  	pte_t *pte;
  
@@ -77062,7 +77324,7 @@
  	__set_current_state(TASK_RUNNING);
  
  	count_vm_event(PGFAULT);
-@@ -3079,6 +3224,8 @@ int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
+@@ -3086,6 +3231,8 @@ int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
  }
  #endif /* __PAGETABLE_PUD_FOLDED */
  
@@ -77071,7 +77333,7 @@
  #ifndef __PAGETABLE_PMD_FOLDED
  /*
   * Allocate page middle directory.
-@@ -3109,6 +3256,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
+@@ -3116,6 +3263,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
  }
  #endif /* __PAGETABLE_PMD_FOLDED */
  
@@ -77080,7 +77342,7 @@
  int make_pages_present(unsigned long addr, unsigned long end)
  {
  	int ret, len, write;
-@@ -3128,6 +3277,8 @@ int make_pages_present(unsigned long addr, unsigned long end)
+@@ -3135,6 +3284,8 @@ int make_pages_present(unsigned long addr, unsigned long end)
  	return ret == len ? 0 : -EFAULT;
  }
  
@@ -77119,7 +77381,7 @@
  	gfp_temp = gfp_mask & ~(__GFP_WAIT|__GFP_IO);
  
 diff --git a/mm/mlock.c b/mm/mlock.c
-index 380ea89..59190a0 100644
+index 2d846cf..9cefc84 100644
 --- a/mm/mlock.c
 +++ b/mm/mlock.c
 @@ -18,6 +18,7 @@
@@ -77130,7 +77392,7 @@
  
  #include "internal.h"
  
-@@ -328,12 +329,14 @@ no_mlock:
+@@ -322,12 +323,14 @@ no_mlock:
   * and re-mlocked by try_to_{munlock|unmap} before we unmap and
   * free them.  This will result in freeing mlocked pages.
   */
@@ -77147,7 +77409,7 @@
  	vma->vm_flags &= ~VM_LOCKED;
  
  	for (addr = start; addr < end; addr += PAGE_SIZE) {
-@@ -393,6 +396,12 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
+@@ -387,6 +390,12 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
  		goto out;	/* don't set VM_LOCKED,  don't count */
  	}
  
@@ -77160,7 +77422,7 @@
  	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
  	*prev = vma_merge(mm, *prev, start, end, newflags, vma->anon_vma,
  			  vma->vm_file, pgoff, vma_policy(vma));
-@@ -404,13 +413,13 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
+@@ -398,13 +407,13 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
  	if (start != vma->vm_start) {
  		ret = split_vma(mm, vma, start, 1);
  		if (ret)
@@ -77176,7 +77438,7 @@
  	}
  
  success:
-@@ -440,6 +449,11 @@ success:
+@@ -434,6 +443,11 @@ success:
  out:
  	*prev = vma;
  	return ret;
@@ -77188,7 +77450,7 @@
  }
  
  static int do_mlock(unsigned long start, size_t len, int on)
-@@ -518,6 +532,7 @@ SYSCALL_DEFINE2(mlock, unsigned long, start, size_t, len)
+@@ -512,6 +526,7 @@ SYSCALL_DEFINE2(mlock, unsigned long, start, size_t, len)
  	up_write(&current->mm->mmap_sem);
  	return error;
  }
@@ -77196,7 +77458,7 @@
  
  SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
  {
-@@ -530,6 +545,7 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
+@@ -524,6 +539,7 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
  	up_write(&current->mm->mmap_sem);
  	return ret;
  }
@@ -77205,7 +77467,7 @@
  static int do_mlockall(int flags)
  {
 diff --git a/mm/mmap.c b/mm/mmap.c
-index b309c75..a3ef2d2 100644
+index 866a666..7ff61c2 100644
 --- a/mm/mmap.c
 +++ b/mm/mmap.c
 @@ -29,6 +29,7 @@
@@ -77358,7 +77620,7 @@
  }
  
  #if defined(CONFIG_STACK_GROWSUP) || defined(CONFIG_IA64)
-@@ -1882,6 +1933,7 @@ int split_vma(struct mm_struct * mm, struct vm_area_struct * vma,
+@@ -1879,6 +1930,7 @@ int split_vma(struct mm_struct * mm, struct vm_area_struct * vma,
  
  	return 0;
  }
@@ -77366,7 +77628,7 @@
  
  /* Munmap is split into 2 main parts -- this part which finds
   * what needs doing, and the areas themselves, which do the
-@@ -1989,7 +2041,7 @@ static inline void verify_mm_writelocked(struct mm_struct *mm)
+@@ -1986,7 +2038,7 @@ static inline void verify_mm_writelocked(struct mm_struct *mm)
   *  anonymous maps.  eventually we may be able to do some
   *  brk-specific accounting here.
   */
@@ -77375,7 +77637,7 @@
  {
  	struct mm_struct * mm = current->mm;
  	struct vm_area_struct * vma, * prev;
-@@ -2049,8 +2101,11 @@ unsigned long do_brk(unsigned long addr, unsigned long len)
+@@ -2046,8 +2098,11 @@ unsigned long do_brk(unsigned long addr, unsigned long len)
  	if (mm->map_count > sysctl_max_map_count)
  		return -ENOMEM;
  
@@ -77388,7 +77650,7 @@
  
  	/* Can we just expand an old private anonymous mapping? */
  	vma = vma_merge(mm, prev, addr, addr + len, flags,
-@@ -2061,11 +2116,10 @@ unsigned long do_brk(unsigned long addr, unsigned long len)
+@@ -2058,11 +2113,10 @@ unsigned long do_brk(unsigned long addr, unsigned long len)
  	/*
  	 * create a vma struct for an anonymous mapping
  	 */
@@ -77404,7 +77666,7 @@
  
  	vma->vm_mm = mm;
  	vma->vm_start = addr;
-@@ -2081,8 +2135,19 @@ out:
+@@ -2078,8 +2132,19 @@ out:
  			mm->locked_vm += (len >> PAGE_SHIFT);
  	}
  	return addr;
@@ -77424,7 +77686,7 @@
  EXPORT_SYMBOL(do_brk);
  
  /* Release all mmaps. */
-@@ -2275,10 +2340,11 @@ static void special_mapping_close(struct vm_area_struct *vma)
+@@ -2272,10 +2337,11 @@ static void special_mapping_close(struct vm_area_struct *vma)
  {
  }
  
@@ -77438,7 +77700,7 @@
  /*
   * Called with mm->mmap_sem held for writing.
 diff --git a/mm/mmzone.c b/mm/mmzone.c
-index f5b7d17..ee9dfe1 100644
+index e35bfb8..e0d5174 100644
 --- a/mm/mmzone.c
 +++ b/mm/mmzone.c
 @@ -14,6 +14,7 @@ struct pglist_data *first_online_pgdat(void)
@@ -77958,7 +78220,7 @@
  		}
  		return 0;
 diff --git a/mm/page_alloc.c b/mm/page_alloc.c
-index 36992b6..cd0501c 100644
+index 902e5fc..1c562d4 100644
 --- a/mm/page_alloc.c
 +++ b/mm/page_alloc.c
 @@ -54,6 +54,9 @@
@@ -77987,7 +78249,7 @@
  	if (page->flags & PAGE_FLAGS_CHECK_AT_PREP)
  		page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
  	return 0;
-@@ -601,6 +606,7 @@ static void __free_pages_ok(struct page *page, unsigned int order)
+@@ -602,6 +607,7 @@ static void __free_pages_ok(struct page *page, unsigned int order)
  	arch_free_page(page, order);
  	kernel_map_pages(page, 1 << order, 0);
  
@@ -77995,7 +78257,7 @@
  	local_irq_save(flags);
  	if (unlikely(wasMlocked))
  		free_page_mlock(page);
-@@ -1102,6 +1108,7 @@ static void free_hot_cold_page(struct page *page, int cold)
+@@ -1103,6 +1109,7 @@ static void free_hot_cold_page(struct page *page, int cold)
  	pcp = &zone_pcp(zone, get_cpu())->pcp;
  	migratetype = get_pageblock_migratetype(page);
  	set_page_private(page, migratetype);
@@ -78003,7 +78265,7 @@
  	local_irq_save(flags);
  	if (unlikely(wasMlocked))
  		free_page_mlock(page);
-@@ -1783,6 +1790,8 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
+@@ -1796,6 +1803,8 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
  	return alloc_flags;
  }
  
@@ -78012,7 +78274,7 @@
  static inline struct page *
  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
  	struct zonelist *zonelist, enum zone_type high_zoneidx,
-@@ -1904,7 +1913,7 @@ rebalance:
+@@ -1917,7 +1926,7 @@ rebalance:
  	}
  
  nopage:
@@ -78021,7 +78283,7 @@
  		printk(KERN_WARNING "%s: page allocation failure."
  			" order:%d, mode:0x%x\n",
  			p->comm, order, gfp_mask);
-@@ -1919,6 +1928,29 @@ got_pg:
+@@ -1932,6 +1941,29 @@ got_pg:
  
  }
  
@@ -78051,7 +78313,7 @@
  /*
   * This is the 'heart' of the zoned buddy allocator.
   */
-@@ -1930,6 +1962,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
+@@ -1943,6 +1975,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
  	struct zone *preferred_zone;
  	struct page *page;
  	int migratetype = allocflags_to_migratetype(gfp_mask);
@@ -78059,7 +78321,7 @@
  
  	gfp_mask &= gfp_allowed_mask;
  
-@@ -1953,6 +1986,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
+@@ -1966,6 +1999,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
  	if (!preferred_zone)
  		return NULL;
  
@@ -78067,7 +78329,7 @@
  	/* First allocation attempt */
  	page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
  			zonelist, high_zoneidx, ALLOC_WMARK_LOW|ALLOC_CPUSET,
-@@ -1962,6 +1996,12 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
+@@ -1975,6 +2009,12 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
  				zonelist, high_zoneidx, nodemask,
  				preferred_zone, migratetype);
  
@@ -80639,7 +80901,7 @@
  }
  
 diff --git a/mm/vmstat.c b/mm/vmstat.c
-index c81321f..44bf18f 100644
+index 42d76c6..d6e4281 100644
 --- a/mm/vmstat.c
 +++ b/mm/vmstat.c
 @@ -15,6 +15,7 @@
@@ -80671,7 +80933,7 @@
  /*
   * Accumulate the vm event counters across all CPUs.
   * The result is unavoidably approximate - it can change
-@@ -800,30 +815,40 @@ static void *vmstat_start(struct seq_file *m, loff_t *pos)
+@@ -813,30 +828,40 @@ static void *vmstat_start(struct seq_file *m, loff_t *pos)
  	unsigned long *v;
  #ifdef CONFIG_VM_EVENT_COUNTERS
  	unsigned long *e;
@@ -80725,7 +80987,7 @@
  	return v + *pos;
  }
  
-@@ -942,7 +967,7 @@ static int __init setup_vmstat(void)
+@@ -955,7 +980,7 @@ static int __init setup_vmstat(void)
  #ifdef CONFIG_PROC_FS
  	proc_create("buddyinfo", S_IRUGO, NULL, &fragmentation_file_operations);
  	proc_create("pagetypeinfo", S_IRUGO, NULL, &pagetypeinfo_file_ops);
@@ -81353,7 +81615,7 @@
  	else
  		set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);
 diff --git a/net/core/dev.c b/net/core/dev.c
-index 915d0ae..57a9f40 100644
+index 915d0ae..7f18124 100644
 --- a/net/core/dev.c
 +++ b/net/core/dev.c
 @@ -130,6 +130,9 @@
@@ -81387,7 +81649,27 @@
  /* Device list insertion */
  static int list_netdevice(struct net_device *dev)
  {
-@@ -1697,6 +1686,24 @@ static int dev_gso_segment(struct sk_buff *skb)
+@@ -922,15 +911,10 @@ int dev_change_name(struct net_device *dev, const char *newname)
+ 		strlcpy(dev->name, newname, IFNAMSIZ);
+ 
+ rollback:
+-	/* For now only devices in the initial network namespace
+-	 * are in sysfs.
+-	 */
+-	if (net == &init_net) {
+-		ret = device_rename(&dev->dev, dev->name);
+-		if (ret) {
+-			memcpy(dev->name, oldname, IFNAMSIZ);
+-			return ret;
+-		}
++	ret = device_rename(&dev->dev, dev->name);
++	if (ret) {
++		memcpy(dev->name, oldname, IFNAMSIZ);
++		return ret;
+ 	}
+ 
+ 	write_lock_bh(&dev_base_lock);
+@@ -1697,6 +1681,24 @@ static int dev_gso_segment(struct sk_buff *skb)
  	return 0;
  }
  
@@ -81412,7 +81694,7 @@
  int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
  			struct netdev_queue *txq)
  {
-@@ -1721,6 +1728,8 @@ int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
+@@ -1721,6 +1723,8 @@ int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
  		if (dev->priv_flags & IFF_XMIT_DST_RELEASE)
  			skb_dst_drop(skb);
  
@@ -81421,17 +81703,24 @@
  		rc = ops->ndo_start_xmit(skb, dev);
  		if (rc == NETDEV_TX_OK)
  			txq_trans_update(txq);
-@@ -1747,6 +1756,9 @@ gso:
+@@ -1747,6 +1751,16 @@ gso:
  
  		skb->next = nskb->next;
  		nskb->next = NULL;
 +
-+		bridge_hard_start_xmit(skb, dev);
++		/*
++		 * If device doesnt need nskb->dst, release it right now while
++		 * its hot in this cpu cache
++		 */
++		if (dev->priv_flags & IFF_XMIT_DST_RELEASE)
++			skb_dst_drop(nskb);
++
++		bridge_hard_start_xmit(nskb, dev);
 +
  		rc = ops->ndo_start_xmit(nskb, dev);
  		if (unlikely(rc != NETDEV_TX_OK)) {
  			nskb->next = skb->next;
-@@ -2288,6 +2300,7 @@ int netif_receive_skb(struct sk_buff *skb)
+@@ -2288,6 +2302,7 @@ int netif_receive_skb(struct sk_buff *skb)
  	struct net_device *null_or_orig;
  	int ret = NET_RX_DROP;
  	__be16 type;
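The net/core/dev.c hunks above carry two user-visible fixes. The dev_change_name() hunk deletes the init_net-only branch around device_rename(), so renaming an interface now updates its sysfs entry in every network namespace (the matching guards in net-sysfs.c are removed further down). The GSO hunk fixes the container bridge hook: it used to be called with skb, the list head the loop keeps for the remaining segments, instead of nskb, the detached segment actually handed to ndo_start_xmit(); the refresh also drops nskb's dst while it is still cache-hot when the device sets IFF_XMIT_DST_RELEASE. A toy list walk (userspace, invented types) of why the argument matters:

    #include <stdio.h>

    struct seg { const char *name; struct seg *next; };

    static void bridge_hook(struct seg *s)
    {
            printf("hook sees %s\n", s->name);
    }

    int main(void)
    {
            struct seg s2 = { "seg2", NULL };
            struct seg s1 = { "seg1", &s2 };
            struct seg head = { "head", &s1 };  /* segments chained off head */
            struct seg *skb = &head;

            do {
                    struct seg *nskb = skb->next;  /* detach next segment */

                    skb->next = nskb->next;
                    nskb->next = NULL;
                    bridge_hook(nskb);  /* the fix: hook the segment being
                                         * sent, not the list head */
                    /* ops->ndo_start_xmit(nskb, dev) would follow here */
            } while (skb->next);
            return 0;
    }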
@@ -81439,7 +81728,7 @@
  
  	if (!skb->tstamp.tv64)
  		net_timestamp(skb);
-@@ -2317,6 +2330,16 @@ int netif_receive_skb(struct sk_buff *skb)
+@@ -2317,6 +2332,16 @@ int netif_receive_skb(struct sk_buff *skb)
  	skb_reset_transport_header(skb);
  	skb->mac_len = skb->network_header - skb->mac_header;
  
@@ -81456,7 +81745,7 @@
  	pt_prev = NULL;
  
  	rcu_read_lock();
-@@ -2375,6 +2398,7 @@ ncls:
+@@ -2375,6 +2400,7 @@ ncls:
  
  out:
  	rcu_read_unlock();
@@ -81464,7 +81753,7 @@
  	return ret;
  }
  EXPORT_SYMBOL(netif_receive_skb);
-@@ -3394,8 +3418,13 @@ static int __dev_set_promiscuity(struct net_device *dev, int inc)
+@@ -3394,8 +3420,13 @@ static int __dev_set_promiscuity(struct net_device *dev, int inc)
  			return -EOVERFLOW;
  		}
  	}
@@ -81480,7 +81769,7 @@
  		       dev->name, (dev->flags & IFF_PROMISC) ? "entered" :
  							       "left");
  		if (audit_enabled) {
-@@ -4547,16 +4576,25 @@ int dev_ioctl(struct net *net, unsigned int cmd, void __user *arg)
+@@ -4547,16 +4578,25 @@ int dev_ioctl(struct net *net, unsigned int cmd, void __user *arg)
  	 *	- require strict serialization.
  	 *	- do not return a value
  	 */
@@ -81509,7 +81798,7 @@
  	case SIOCSMIIREG:
  	case SIOCBONDENSLAVE:
  	case SIOCBONDRELEASE:
-@@ -4619,12 +4657,11 @@ int dev_ioctl(struct net *net, unsigned int cmd, void __user *arg)
+@@ -4619,12 +4659,11 @@ int dev_ioctl(struct net *net, unsigned int cmd, void __user *arg)
   */
  static int dev_new_index(struct net *net)
  {
@@ -81526,7 +81815,7 @@
  	}
  }
  
-@@ -4779,6 +4816,10 @@ int register_netdevice(struct net_device *dev)
+@@ -4779,6 +4818,10 @@ int register_netdevice(struct net_device *dev)
  	BUG_ON(dev->reg_state != NETREG_UNINITIALIZED);
  	BUG_ON(!net);
  
@@ -81537,7 +81826,7 @@
  	spin_lock_init(&dev->addr_list_lock);
  	netdev_set_addr_lockdep_class(dev);
  	netdev_init_queue_locks(dev);
-@@ -4849,6 +4890,10 @@ int register_netdevice(struct net_device *dev)
+@@ -4849,6 +4892,10 @@ int register_netdevice(struct net_device *dev)
  
  	set_bit(__LINK_STATE_PRESENT, &dev->state);
  
@@ -81548,7 +81837,7 @@
  	dev_init_scheduler(dev);
  	dev_hold(dev);
  	list_netdevice(dev);
-@@ -5029,12 +5074,14 @@ static void netdev_wait_allrefs(struct net_device *dev)
+@@ -5029,12 +5076,14 @@ static void netdev_wait_allrefs(struct net_device *dev)
  void netdev_run_todo(void)
  {
  	struct list_head list;
@@ -81563,7 +81852,7 @@
  	while (!list_empty(&list)) {
  		struct net_device *dev
  			= list_entry(list.next, struct net_device, todo_list);
-@@ -5047,6 +5094,7 @@ void netdev_run_todo(void)
+@@ -5047,6 +5096,7 @@ void netdev_run_todo(void)
  			continue;
  		}
  
@@ -81571,7 +81860,7 @@
  		dev->reg_state = NETREG_UNREGISTERED;
  
  		on_each_cpu(flush_backlog, dev, 1);
-@@ -5059,12 +5107,21 @@ void netdev_run_todo(void)
+@@ -5059,12 +5109,21 @@ void netdev_run_todo(void)
  		WARN_ON(dev->ip6_ptr);
  		WARN_ON(dev->dn_ptr);
  
@@ -81593,7 +81882,7 @@
  }
  
  /**
-@@ -5147,13 +5204,13 @@ struct net_device *alloc_netdev_mq(int sizeof_priv, const char *name,
+@@ -5147,13 +5206,13 @@ struct net_device *alloc_netdev_mq(int sizeof_priv, const char *name,
  	/* ensure 32-byte alignment of whole construct */
  	alloc_size += NETDEV_ALIGN - 1;
  
@@ -81609,7 +81898,7 @@
  	if (!tx) {
  		printk(KERN_ERR "alloc_netdev: Unable to allocate "
  		       "tx qdiscs.\n");
-@@ -5296,11 +5353,18 @@ EXPORT_SYMBOL(unregister_netdev);
+@@ -5296,11 +5355,18 @@ EXPORT_SYMBOL(unregister_netdev);
   *	Callers must hold the rtnl semaphore.
   */
  
@@ -81629,7 +81918,23 @@
  
  	ASSERT_RTNL();
  
-@@ -5360,6 +5424,11 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+@@ -5309,15 +5375,6 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+ 	if (dev->features & NETIF_F_NETNS_LOCAL)
+ 		goto out;
+ 
+-#ifdef CONFIG_SYSFS
+-	/* Don't allow real devices to be moved when sysfs
+-	 * is enabled.
+-	 */
+-	err = -EINVAL;
+-	if (dev->dev.parent)
+-		goto out;
+-#endif
+-
+ 	/* Ensure the device has been registrered */
+ 	err = -EINVAL;
+ 	if (dev->reg_state != NETREG_REGISTERED)
+@@ -5360,6 +5417,11 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
  	err = -ENODEV;
  	unlist_netdevice(dev);
  
@@ -81641,7 +81946,7 @@
  	synchronize_net();
  
  	/* Shutdown queueing discipline. */
-@@ -5368,7 +5437,9 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+@@ -5368,7 +5430,9 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
  	/* Notify protocols, that we are about to destroy
  	   this device. They should clean all the things.
  	*/
@@ -81651,7 +81956,7 @@
  
  	/*
  	 *	Flush the unicast and multicast chains
-@@ -5376,7 +5447,9 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+@@ -5376,7 +5440,9 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
  	dev_unicast_flush(dev);
  	dev_addr_discard(dev);
  
@@ -81661,7 +81966,7 @@
  
  	/* Actually switch the network namespace */
  	dev_net_set(dev, net);
-@@ -5394,14 +5467,18 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+@@ -5394,14 +5460,18 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
  	}
  
  	/* Fixup kobjects */
@@ -81680,7 +81985,7 @@
  
  	/*
  	 *	Prevent userspace races by waiting until the network
-@@ -5416,6 +5493,14 @@ out:
+@@ -5416,6 +5486,14 @@ out:
  }
  EXPORT_SYMBOL_GPL(dev_change_net_namespace);
  
@@ -81695,7 +82000,7 @@
  static int dev_cpu_callback(struct notifier_block *nfb,
  			    unsigned long action,
  			    void *ocpu)
-@@ -5507,7 +5592,7 @@ static struct hlist_head *netdev_create_hash(void)
+@@ -5507,7 +5585,7 @@ static struct hlist_head *netdev_create_hash(void)
  	int i;
  	struct hlist_head *hash;
  
@@ -81704,7 +82009,7 @@
  	if (hash != NULL)
  		for (i = 0; i < NETDEV_HASHENTRIES; i++)
  			INIT_HLIST_HEAD(&hash[i]);
-@@ -5701,3 +5786,32 @@ static int __init initialize_hashrnd(void)
+@@ -5701,3 +5779,32 @@ static int __init initialize_hashrnd(void)
  
  late_initcall_sync(initialize_hashrnd);
  
@@ -81750,7 +82055,7 @@
  		for (dst = dst_busy_list; dst; dst = dst->next) {
  			last = dst;
 diff --git a/net/core/ethtool.c b/net/core/ethtool.c
-index 5aef51e..b7d4d7ff 100644
+index 450862e..f0ffc06 100644
 --- a/net/core/ethtool.c
 +++ b/net/core/ethtool.c
 @@ -975,7 +975,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
@@ -81923,7 +82228,7 @@
  
  			dev_put(dev);
 diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
-index d5617d4..c70f2a2 100644
+index d5617d4..abbb0db 100644
 --- a/net/core/net-sysfs.c
 +++ b/net/core/net-sysfs.c
 @@ -268,6 +268,27 @@ static struct device_attribute net_class_attributes[] = {
@@ -81954,7 +82259,17 @@
  /* Show a given an attribute in the statistics group */
  static ssize_t netstat_show(const struct device *d,
  			    struct device_attribute *attr, char *buf,
-@@ -462,7 +483,7 @@ static void netdev_release(struct device *d)
+@@ -430,9 +451,6 @@ static int netdev_uevent(struct device *d, struct kobj_uevent_env *env)
+ 	struct net_device *dev = to_net_dev(d);
+ 	int retval;
+ 
+-	if (!net_eq(dev_net(dev), &init_net))
+-		return 0;
+-
+ 	/* pass interface to uevent. */
+ 	retval = add_uevent_var(env, "INTERFACE=%s", dev->name);
+ 	if (retval)
+@@ -462,7 +480,7 @@ static void netdev_release(struct device *d)
  	kfree((char *)dev - dev->padded);
  }
  
@@ -81963,7 +82278,7 @@
  	.name = "net",
  	.dev_release = netdev_release,
  #ifdef CONFIG_SYSFS
-@@ -472,6 +493,13 @@ static struct class net_class = {
+@@ -472,6 +490,13 @@ static struct class net_class = {
  	.dev_uevent = netdev_uevent,
  #endif
  };
@@ -81977,7 +82292,17 @@
  
  /* Delete sysfs entries but hold kobject reference until after all
   * netdev references are gone.
-@@ -494,7 +522,7 @@ int netdev_register_kobject(struct net_device *net)
+@@ -482,9 +507,6 @@ void netdev_unregister_kobject(struct net_device * net)
+ 
+ 	kobject_get(&dev->kobj);
+ 
+-	if (dev_net(net) != &init_net)
+-		return;
+-
+ 	device_del(dev);
+ }
+ 
+@@ -494,7 +516,7 @@ int netdev_register_kobject(struct net_device *net)
  	struct device *dev = &(net->dev);
  	const struct attribute_group **groups = net->sysfs_groups;
  
@@ -81986,7 +82311,7 @@
  	dev->platform_data = net;
  	dev->groups = groups;
  
-@@ -509,9 +537,6 @@ int netdev_register_kobject(struct net_device *net)
+@@ -509,9 +531,6 @@ int netdev_register_kobject(struct net_device *net)
  #endif
  #endif /* CONFIG_SYSFS */
  
@@ -81996,7 +82321,7 @@
  	return device_add(dev);
  }
  
-@@ -534,7 +559,15 @@ void netdev_initialize_kobject(struct net_device *net)
+@@ -534,7 +553,15 @@ void netdev_initialize_kobject(struct net_device *net)
  	device_initialize(device);
  }
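The net-sysfs.c hunks above follow through on the same theme: the early returns that kept interfaces outside the initial namespace invisible to uevents and device unregistration (in netdev_uevent() and netdev_unregister_kobject()) are deleted. A toy of what removing the uevent guard changes, all types invented:

    #include <stdio.h>

    struct net { int id; };
    struct net_device { const char *name; struct net *net; };

    static struct net init_net = { 0 };

    static int netdev_uevent(const struct net_device *dev)
    {
            /* deleted by the hunk above:
             * if (dev->net != &init_net)
             *         return 0;
             */
            printf("INTERFACE=%s\n", dev->name);
            return 0;
    }

    int main(void)
    {
            struct net ve101 = { 101 };
            struct net_device eth0 = { "eth0", &init_net };
            struct net_device veth = { "veth101.0", &ve101 };

            netdev_uevent(&eth0);
            netdev_uevent(&veth);  /* now emitted as well */
            return 0;
    }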
  
@@ -82200,7 +82525,7 @@
  		for (i=fpl->count-1; i>=0; i--)
  			get_file(fpl->fp[i]);
 diff --git a/net/core/skbuff.c b/net/core/skbuff.c
-index ec85681..b8865de 100644
+index 283f441..c680a7f 100644
 --- a/net/core/skbuff.c
 +++ b/net/core/skbuff.c
 @@ -67,6 +67,7 @@
@@ -82561,7 +82886,7 @@
  						  NULL);
  			if (prot->twsk_prot->twsk_slab == NULL)
 diff --git a/net/core/stream.c b/net/core/stream.c
-index a37debf..af5873a 100644
+index e48c85f..dfdded0 100644
 --- a/net/core/stream.c
 +++ b/net/core/stream.c
 @@ -112,8 +112,10 @@ EXPORT_SYMBOL(sk_stream_wait_close);
@@ -83273,10 +83598,10 @@
  
  	/* Point into the IP datagram, just past the header. */
 diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
-index 4d50daa..1f681c7 100644
+index 2ef9026..0c9b367 100644
 --- a/net/ipv4/ip_output.c
 +++ b/net/ipv4/ip_output.c
-@@ -1362,12 +1362,13 @@ void ip_send_reply(struct sock *sk, struct sk_buff *skb, struct ip_reply_arg *ar
+@@ -1369,12 +1369,13 @@ void ip_send_reply(struct sock *sk, struct sk_buff *skb, struct ip_reply_arg *ar
  		char			data[40];
  	} replyopts;
  	struct ipcm_cookie ipc;
@@ -83291,7 +83616,7 @@
  	daddr = ipc.addr = rt->rt_src;
  	ipc.opt = NULL;
  	ipc.shtx.flags = 0;
-@@ -1383,7 +1384,7 @@ void ip_send_reply(struct sock *sk, struct sk_buff *skb, struct ip_reply_arg *ar
+@@ -1390,7 +1391,7 @@ void ip_send_reply(struct sock *sk, struct sk_buff *skb, struct ip_reply_arg *ar
  		struct flowi fl = { .oif = arg->bound_dev_if,
  				    .nl_u = { .ip4_u =
  					      { .daddr = daddr,
@@ -84402,7 +84727,7 @@
  	local_bh_enable();
  
 diff --git a/net/ipv4/route.c b/net/ipv4/route.c
-index 5b1050a..db496b6 100644
+index 6c8f6c9..ddbd93e 100644
 --- a/net/ipv4/route.c
 +++ b/net/ipv4/route.c
 @@ -69,6 +69,7 @@
@@ -84527,7 +84852,7 @@
  		.procname	= "rt_cache_rebuild_count",
  		.data		= &init_net.ipv4.sysctl_rt_cache_rebuild_count,
 diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
-index f1813bc..f2d3769 100644
+index 4678308..256bcc7 100644
 --- a/net/ipv4/tcp.c
 +++ b/net/ipv4/tcp.c
 @@ -272,6 +272,10 @@
@@ -84549,8 +84874,8 @@
  
  	sock_poll_wait(file, sk->sk_sleep, wait);
  	if (sk->sk_state == TCP_LISTEN)
-@@ -389,6 +394,21 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
- 	
+@@ -387,6 +392,21 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 
  	mask = 0;
  
 +	check_send_space = 1;
@@ -84571,7 +84896,7 @@
  	/*
  	 * POLLHUP is certainly not done right. But poll() doesn't
  	 * have a notion of HUP in just one direction, and for a
-@@ -436,7 +456,7 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+@@ -434,7 +454,7 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
  		if (tp->rcv_nxt - tp->copied_seq >= target)
  			mask |= POLLIN | POLLRDNORM;
  
@@ -84580,7 +84905,7 @@
  			if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk)) {
  				mask |= POLLOUT | POLLWRNORM;
  			} else {  /* send SIGIO later */
-@@ -684,7 +704,7 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp)
+@@ -688,7 +708,7 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp)
  
  	skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);
  	if (skb) {
@@ -84589,7 +84914,7 @@
  			/*
  			 * Make sure that we have exactly size bytes
  			 * available to the caller, no more, no less.
-@@ -770,15 +790,23 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page **pages, int poffse
+@@ -774,15 +794,23 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page **pages, int poffse
  		int copy, i, can_coalesce;
  		int offset = poffset % PAGE_SIZE;
  		int size = min_t(size_t, psize, PAGE_SIZE - offset);
@@ -84613,7 +84938,7 @@
  
  			skb_entail(sk, skb);
  			copy = size_goal;
-@@ -793,7 +821,7 @@ new_segment:
+@@ -797,7 +825,7 @@ new_segment:
  			tcp_mark_push(tp, skb);
  			goto new_segment;
  		}
@@ -84622,7 +84947,7 @@
  			goto wait_for_memory;
  
  		if (can_coalesce) {
-@@ -834,10 +862,15 @@ new_segment:
+@@ -838,10 +866,15 @@ new_segment:
  wait_for_sndbuf:
  		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
  wait_for_memory:
@@ -84639,7 +84964,7 @@
  			goto do_error;
  
  		mss_now = tcp_send_mss(sk, &size_goal, flags);
-@@ -873,12 +906,8 @@ ssize_t tcp_sendpage(struct socket *sock, struct page *page, int offset,
+@@ -877,12 +910,8 @@ ssize_t tcp_sendpage(struct socket *sock, struct page *page, int offset,
  	return res;
  }
  
@@ -84653,7 +84978,7 @@
  	int tmp = tp->mss_cache;
  
  	if (sk->sk_route_caps & NETIF_F_SG) {
-@@ -936,6 +965,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
+@@ -940,6 +969,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
  	while (--iovlen >= 0) {
  		size_t seglen = iov->iov_len;
  		unsigned char __user *from = iov->iov_base;
@@ -84661,7 +84986,7 @@
  
  		iov++;
  
-@@ -951,17 +981,27 @@ int tcp_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
+@@ -955,17 +985,27 @@ int tcp_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
  			}
  
  			if (copy <= 0) {
@@ -84690,7 +85015,7 @@
  
  				/*
  				 * Check whether we can use HW checksum.
-@@ -1008,6 +1048,7 @@ new_segment:
+@@ -1012,6 +1052,7 @@ new_segment:
  				} else if (page) {
  					if (off == PAGE_SIZE) {
  						put_page(page);
@@ -84698,7 +85023,7 @@
  						TCP_PAGE(sk) = page = NULL;
  						off = 0;
  					}
-@@ -1017,10 +1058,13 @@ new_segment:
+@@ -1021,10 +1062,13 @@ new_segment:
  				if (copy > PAGE_SIZE - off)
  					copy = PAGE_SIZE - off;
  
@@ -84713,7 +85038,7 @@
  					/* Allocate new cache page. */
  					if (!(page = sk_stream_alloc_page(sk)))
  						goto wait_for_memory;
-@@ -1052,7 +1096,8 @@ new_segment:
+@@ -1056,7 +1100,8 @@ new_segment:
  					} else if (off + copy < PAGE_SIZE) {
  						get_page(page);
  						TCP_PAGE(sk) = page;
@@ -84723,7 +85048,7 @@
  				}
  
  				TCP_OFF(sk) = off + copy;
-@@ -1083,10 +1128,15 @@ new_segment:
+@@ -1087,10 +1132,15 @@ new_segment:
  wait_for_sndbuf:
  			set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
  wait_for_memory:
@@ -84740,7 +85065,7 @@
  				goto do_error;
  
  			mss_now = tcp_send_mss(sk, &size_goal, flags);
-@@ -1184,8 +1234,10 @@ void tcp_cleanup_rbuf(struct sock *sk, int copied)
+@@ -1188,8 +1238,10 @@ void tcp_cleanup_rbuf(struct sock *sk, int copied)
  	struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
  
  	WARN(skb && !before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq),
@@ -84753,7 +85078,7 @@
  #endif
  
  	if (inet_csk_ack_scheduled(sk)) {
-@@ -1446,8 +1498,9 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+@@ -1451,8 +1503,9 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
  				goto found_ok_skb;
  			if (tcp_hdr(skb)->fin)
  				goto found_fin_ok;
@@ -84764,7 +85089,7 @@
  					*seq, TCP_SKB_CB(skb)->seq,
  					tp->rcv_nxt, flags);
  		}
-@@ -1510,8 +1563,19 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+@@ -1515,8 +1568,19 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
  
  			tp->ucopy.len = len;
  
@@ -84786,7 +85111,7 @@
  
  			/* Ugly... If prequeue is not empty, we have to
  			 * process it before releasing socket, otherwise
-@@ -1935,7 +1999,7 @@ adjudge_to_death:
+@@ -1940,7 +2004,7 @@ adjudge_to_death:
  	bh_lock_sock(sk);
  	WARN_ON(sock_owned_by_user(sk));
  
@@ -84795,32 +85120,22 @@
  
  	/* Have we already been destroyed by a softirq or backlog? */
  	if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
-@@ -1975,14 +2039,19 @@ adjudge_to_death:
- 		}
+@@ -1981,10 +2045,12 @@ adjudge_to_death:
  	}
  	if (sk->sk_state != TCP_CLOSE) {
--		int orphan_count = percpu_counter_read_positive(
--						sk->sk_prot->orphan_count);
-+		int orphans = ub_get_orphan_count(sk);
- 
  		sk_mem_reclaim(sk);
--		if (tcp_too_many_orphans(sk, orphan_count)) {
--			if (net_ratelimit())
-+		if (ub_too_many_orphans(sk, orphans)) {
-+			if (net_ratelimit()) {
-+				int ubid = 0;
-+#ifdef CONFIG_BEANCOUNTERS
-+				ubid = sock_has_ubc(sk) ?
-+				   top_beancounter(sock_bc(sk)->ub)->ub_uid : 0;
-+#endif
+-		if (tcp_too_many_orphans(sk, 0)) {
++		if (ub_too_many_orphans(sk, 0)) {
+ 			if (net_ratelimit())
  				printk(KERN_INFO "TCP: too many of orphaned "
 -				       "sockets\n");
-+				       "sockets (%d in CT%d)\n", orphans, ubid);
-+			}
++				       "sockets (%d in CT%d)\n",
++				       ub_get_orphan_count(sk),
++				       sock_has_ubc(sk) ? sock_bc(sk)->ub->ub_uid : -1);
  			tcp_set_state(sk, TCP_CLOSE);
  			tcp_send_active_reset(sk, GFP_ATOMIC);
  			NET_INC_STATS_BH(sock_net(sk),
-@@ -2059,6 +2128,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+@@ -2061,6 +2127,7 @@ int tcp_disconnect(struct sock *sk, int flags)
  	tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
  	tp->snd_cwnd_cnt = 0;
  	tp->bytes_acked = 0;
@@ -84828,7 +85143,7 @@
  	tcp_set_ca_state(sk, TCP_CA_Open);
  	tcp_clear_retrans(tp);
  	inet_csk_delack_init(sk);
-@@ -2886,10 +2956,11 @@ void __init tcp_init(void)
+@@ -2888,10 +2955,11 @@ void __init tcp_init(void)
  
  	percpu_counter_init(&tcp_sockets_allocated, 0);
  	percpu_counter_init(&tcp_orphan_count, 0);
@@ -84841,7 +85156,7 @@
  
  	/* Size and allocate the main established and bind bucket
  	 * hash tables.
-@@ -2958,6 +3029,11 @@ void __init tcp_init(void)
+@@ -2950,6 +3018,11 @@ void __init tcp_init(void)
  	sysctl_tcp_mem[1] = limit;
  	sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;
  
@@ -84854,7 +85169,7 @@
  	limit = ((unsigned long)sysctl_tcp_mem[1]) << (PAGE_SHIFT - 7);
  	max_share = min(4UL*1024*1024, limit);
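The net/ipv4/tcp.c orphan hunk above shrank considerably in this refresh: the close path keeps upstream's tcp_too_many_orphans(sk, 0) call shape and merely swaps in ub_too_many_orphans(sk, 0), extending the ratelimited message with the per-container count and ID (ub_get_orphan_count() and sock_bc(sk)->ub->ub_uid in the new lines). A self-contained toy of the per-container limit idea -- every name and number invented, not the real beancounter API:

    #include <stdio.h>

    /* One counter and limit per container instead of a single global
     * orphan count. */
    struct beancounter { int ub_uid; int orphans; int orphans_max; };

    static int ub_too_many_orphans(const struct beancounter *ub)
    {
            return ub->orphans > ub->orphans_max;
    }

    int main(void)
    {
            struct beancounter ct101 = { 101, 70, 64 };
            struct beancounter ct102 = { 102, 3, 64 };
            const struct beancounter *cts[] = { &ct101, &ct102 };
            int i;

            for (i = 0; i < 2; i++)
                    if (ub_too_many_orphans(cts[i]))
                            printf("TCP: too many of orphaned sockets "
                                   "(%d in CT%d)\n",
                                   cts[i]->orphans, cts[i]->ub_uid);
            return 0;
    }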
 diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
-index 2433bcd..0eb9c17 100644
+index ce1ce82..506d87f 100644
 --- a/net/ipv4/tcp_input.c
 +++ b/net/ipv4/tcp_input.c
 @@ -72,6 +72,8 @@
@@ -84893,7 +85208,7 @@
  	    atomic_read(&tcp_memory_allocated) < sysctl_tcp_mem[0]) {
  		sk->sk_rcvbuf = min(atomic_read(&sk->sk_rmem_alloc),
  				    sysctl_tcp_rmem[2]);
-@@ -4268,19 +4272,19 @@ static void tcp_ofo_queue(struct sock *sk)
+@@ -4270,19 +4274,19 @@ static void tcp_ofo_queue(struct sock *sk)
  static int tcp_prune_ofo_queue(struct sock *sk);
  static int tcp_prune_queue(struct sock *sk);
  
@@ -84917,7 +85232,7 @@
  				return -1;
  		}
  	}
-@@ -4332,8 +4336,8 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
+@@ -4334,8 +4338,8 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
  		if (eaten <= 0) {
  queue_and_out:
  			if (eaten < 0 &&
@@ -84928,7 +85243,7 @@
  
  			skb_set_owner_r(skb, sk);
  			__skb_queue_tail(&sk->sk_receive_queue, skb);
-@@ -4377,6 +4381,12 @@ out_of_window:
+@@ -4379,6 +4383,12 @@ out_of_window:
  drop:
  		__kfree_skb(skb);
  		return;
@@ -84941,7 +85256,7 @@
  	}
  
  	/* Out of window. F.e. zero window probe. */
-@@ -4403,7 +4413,7 @@ drop:
+@@ -4405,7 +4415,7 @@ drop:
  
  	TCP_ECN_check_ce(tp, skb);
  
@@ -84950,7 +85265,7 @@
  		goto drop;
  
  	/* Disable header prediction. */
-@@ -4589,6 +4599,10 @@ restart:
+@@ -4591,6 +4601,10 @@ restart:
  		nskb = alloc_skb(copy + header, GFP_ATOMIC);
  		if (!nskb)
  			return;
@@ -84961,7 +85276,7 @@
  
  		skb_set_mac_header(nskb, skb_mac_header(skb) - skb->head);
  		skb_set_network_header(nskb, (skb_network_header(skb) -
-@@ -4717,7 +4731,7 @@ static int tcp_prune_queue(struct sock *sk)
+@@ -4719,7 +4733,7 @@ static int tcp_prune_queue(struct sock *sk)
  
  	if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
  		tcp_clamp_window(sk);
@@ -84970,7 +85285,7 @@
  		tp->rcv_ssthresh = min(tp->rcv_ssthresh, 4U * tp->advmss);
  
  	tcp_collapse_ofo_queue(sk);
-@@ -4783,7 +4797,7 @@ static int tcp_should_expand_sndbuf(struct sock *sk)
+@@ -4785,7 +4799,7 @@ static int tcp_should_expand_sndbuf(struct sock *sk)
  		return 0;
  
  	/* If we are under global TCP memory pressure, do not expand.  */
@@ -84979,7 +85294,7 @@
  		return 0;
  
  	/* If we are under soft global TCP memory pressure, do not expand.  */
-@@ -5286,6 +5300,10 @@ int tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
+@@ -5288,6 +5302,10 @@ int tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
  
  				if ((int)skb->truesize > sk->sk_forward_alloc)
  					goto step5;
@@ -85450,7 +85765,7 @@
  	/* Reserve space for headers. */
  	skb_reserve(buff, MAX_TCP_HEADER);
 diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
-index cdb2ca7..78846e4 100644
+index 57d5501..896f0f4 100644
 --- a/net/ipv4/tcp_timer.c
 +++ b/net/ipv4/tcp_timer.c
 @@ -20,6 +20,8 @@
@@ -85462,38 +85777,16 @@
  
  int sysctl_tcp_syn_retries __read_mostly = TCP_SYN_RETRIES;
  int sysctl_tcp_synack_retries __read_mostly = TCP_SYNACK_RETRIES;
-@@ -65,7 +67,8 @@ static void tcp_write_err(struct sock *sk)
- static int tcp_out_of_resources(struct sock *sk, int do_reset)
- {
- 	struct tcp_sock *tp = tcp_sk(sk);
--	int orphans = percpu_counter_read_positive(&tcp_orphan_count);
-+	int orphans = ub_get_orphan_count(sk);
-+	int orph = orphans;
- 
- 	/* If peer does not open window for long time, or did not transmit
- 	 * anything for long time, penalize it. */
-@@ -76,10 +79,16 @@ static int tcp_out_of_resources(struct sock *sk, int do_reset)
+@@ -76,7 +78,7 @@ static int tcp_out_of_resources(struct sock *sk, int do_reset)
  	if (sk->sk_err_soft)
- 		orphans <<= 1;
+ 		shift++;
  
--	if (tcp_too_many_orphans(sk, orphans)) {
--		if (net_ratelimit())
--			printk(KERN_INFO "Out of socket memory\n");
--
-+	if (ub_too_many_orphans(sk, orphans)) {
-+		if (net_ratelimit()) {
-+			int ubid = 0;
-+#ifdef CONFIG_BEANCOUNTERS
-+			ubid = sock_has_ubc(sk) ?
-+				top_beancounter(sock_bc(sk)->ub)->ub_uid : 0;
-+#endif
-+			printk(KERN_INFO "Orphaned socket dropped "
-+			       "(%d,%d in CT%d)\n", orph, orphans, ubid);
-+		}
- 		/* Catch exceptional cases, when connection requires reset.
- 		 *      1. Last segment was sent recently. */
- 		if ((s32)(tcp_time_stamp - tp->lsndtime) <= TCP_TIMEWAIT_LEN ||
-@@ -177,6 +186,9 @@ static void tcp_delack_timer(unsigned long data)
+-	if (tcp_too_many_orphans(sk, shift)) {
++	if (ub_too_many_orphans(sk, shift)) {
+ 		if (net_ratelimit())
+ 			printk(KERN_INFO "Out of socket memory\n");
+ 
+@@ -177,6 +179,9 @@ static void tcp_delack_timer(unsigned long data)
  	struct sock *sk = (struct sock *)data;
  	struct tcp_sock *tp = tcp_sk(sk);
  	struct inet_connection_sock *icsk = inet_csk(sk);
@@ -85503,7 +85796,7 @@
  
  	bh_lock_sock(sk);
  	if (sock_owned_by_user(sk)) {
-@@ -231,6 +243,8 @@ out:
+@@ -231,6 +236,8 @@ out:
  out_unlock:
  	bh_unlock_sock(sk);
  	sock_put(sk);
@@ -85512,7 +85805,7 @@
  }
  
  static void tcp_probe_timer(struct sock *sk)
-@@ -238,10 +252,13 @@ static void tcp_probe_timer(struct sock *sk)
+@@ -238,10 +245,13 @@ static void tcp_probe_timer(struct sock *sk)
  	struct inet_connection_sock *icsk = inet_csk(sk);
  	struct tcp_sock *tp = tcp_sk(sk);
  	int max_probes;
@@ -85527,7 +85820,7 @@
  	}
  
  	/* *WARNING* RFC 1122 forbids this
-@@ -267,7 +284,7 @@ static void tcp_probe_timer(struct sock *sk)
+@@ -267,7 +277,7 @@ static void tcp_probe_timer(struct sock *sk)
  		max_probes = tcp_orphan_retries(sk, alive);
  
  		if (tcp_out_of_resources(sk, alive || icsk->icsk_probes_out <= max_probes))
@@ -85536,7 +85829,7 @@
  	}
  
  	if (icsk->icsk_probes_out > max_probes) {
-@@ -276,6 +293,9 @@ static void tcp_probe_timer(struct sock *sk)
+@@ -276,6 +286,9 @@ static void tcp_probe_timer(struct sock *sk)
  		/* Only send another probe if we didn't close things up. */
  		tcp_send_probe0(sk);
  	}
@@ -85546,7 +85839,7 @@
  }
  
  /*
-@@ -286,6 +306,9 @@ void tcp_retransmit_timer(struct sock *sk)
+@@ -286,6 +299,9 @@ void tcp_retransmit_timer(struct sock *sk)
  {
  	struct tcp_sock *tp = tcp_sk(sk);
  	struct inet_connection_sock *icsk = inet_csk(sk);
@@ -85556,7 +85849,7 @@
  
  	if (!tp->packets_out)
  		goto out;
-@@ -391,7 +414,8 @@ out_reset_timer:
+@@ -391,7 +407,8 @@ out_reset_timer:
  	if (retransmits_timed_out(sk, sysctl_tcp_retries1 + 1))
  		__sk_dst_reset(sk);
  
@@ -85566,7 +85859,7 @@
  }
  
  static void tcp_write_timer(unsigned long data)
-@@ -399,6 +423,9 @@ static void tcp_write_timer(unsigned long data)
+@@ -399,6 +416,9 @@ static void tcp_write_timer(unsigned long data)
  	struct sock *sk = (struct sock *)data;
  	struct inet_connection_sock *icsk = inet_csk(sk);
  	int event;
@@ -85576,7 +85869,7 @@
  
  	bh_lock_sock(sk);
  	if (sock_owned_by_user(sk)) {
-@@ -433,6 +460,8 @@ out:
+@@ -433,6 +453,8 @@ out:
  out_unlock:
  	bh_unlock_sock(sk);
  	sock_put(sk);
@@ -85585,7 +85878,7 @@
  }
  
  /*
-@@ -463,6 +492,9 @@ static void tcp_keepalive_timer (unsigned long data)
+@@ -463,6 +485,9 @@ static void tcp_keepalive_timer (unsigned long data)
  	struct inet_connection_sock *icsk = inet_csk(sk);
  	struct tcp_sock *tp = tcp_sk(sk);
  	__u32 elapsed;
@@ -85595,7 +85888,7 @@
  
  	/* Only process if socket is not in use. */
  	bh_lock_sock(sk);
-@@ -534,4 +566,5 @@ death:
+@@ -534,4 +559,5 @@ death:
  out:
  	bh_unlock_sock(sk);
  	sock_put(sk);
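tcp_timer.c gets the matching simplification in tcp_out_of_resources(): the old patch carried its own count-based branch with an extra "Orphaned socket dropped" printk, while the refresh keeps upstream's shift-based test (sk_err_soft bumps the shift) and only redirects it to ub_too_many_orphans(sk, shift). That also appears to be why the standalone revert of the upstream orphan-test rework could be dropped from the series file below. The shift arithmetic as a toy, numbers invented:

    #include <stdio.h>

    /* Each unit of shift doubles the weighted orphan count, so a socket
     * with sk_err_soft set trips the limit twice as early. */
    static int too_many_orphans(int orphans, int shift, int limit)
    {
            return (orphans << shift) > limit;
    }

    int main(void)
    {
            int limit = 64;

            printf("40 orphans, shift 0 -> %d\n", too_many_orphans(40, 0, limit));
            printf("40 orphans, shift 1 -> %d\n", too_many_orphans(40, 1, limit));
            return 0;
    }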
@@ -85894,7 +86187,7 @@
  	if (!fib6_node_kmem)
  		goto out;
 diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
-index cd48801..15e86e6 100644
+index eca3ef7..7cc246d 100644
 --- a/net/ipv6/ip6_output.c
 +++ b/net/ipv6/ip6_output.c
 @@ -522,6 +522,20 @@ int ip6_forward(struct sk_buff *skb)
@@ -89770,7 +90063,7 @@
  
  	if (xs_bind6(transport, sock) < 0) {
 diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
-index fc820cd..3c3c16d 100644
+index 065dc66..db6ef80 100644
 --- a/net/unix/af_unix.c
 +++ b/net/unix/af_unix.c
 @@ -115,6 +115,9 @@
@@ -89812,7 +90105,7 @@
  }
  
  static int unix_create(struct net *net, struct socket *sock, int protocol)
-@@ -1026,6 +1031,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+@@ -1035,6 +1040,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
  	int st;
  	int err;
  	long timeo;
@@ -89820,7 +90113,7 @@
  
  	err = unix_mkname(sunaddr, addr_len, &hash);
  	if (err < 0)
-@@ -1054,6 +1060,10 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+@@ -1063,6 +1069,10 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
  	skb = sock_wmalloc(newsk, 1, 0, GFP_KERNEL);
  	if (skb == NULL)
  		goto out;
@@ -89831,7 +90124,7 @@
  
  restart:
  	/*  Find listening sock. */
-@@ -1302,7 +1312,7 @@ static void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb)
+@@ -1311,7 +1321,7 @@ static void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb)
  		unix_notinflight(scm->fp->fp[i]);
  }
  
@@ -89840,7 +90133,7 @@
  {
  	struct scm_cookie scm;
  	memset(&scm, 0, sizeof(scm));
-@@ -1313,6 +1323,7 @@ static void unix_destruct_fds(struct sk_buff *skb)
+@@ -1322,6 +1332,7 @@ static void unix_destruct_fds(struct sk_buff *skb)
  	scm_destroy(&scm);
  	sock_wfree(skb);
  }
@@ -89848,7 +90141,7 @@
  
  static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb)
  {
-@@ -1538,6 +1549,16 @@ static int unix_stream_sendmsg(struct kiocb *kiocb, struct socket *sock,
+@@ -1547,6 +1558,16 @@ static int unix_stream_sendmsg(struct kiocb *kiocb, struct socket *sock,
  
  		size = len-sent;
  
@@ -89865,7 +90158,7 @@
  		/* Keep two messages in the pipe so it schedules better */
  		if (size > ((sk->sk_sndbuf >> 1) - 64))
  			size = (sk->sk_sndbuf >> 1) - 64;
-@@ -1549,8 +1570,9 @@ static int unix_stream_sendmsg(struct kiocb *kiocb, struct socket *sock,
+@@ -1558,8 +1579,9 @@ static int unix_stream_sendmsg(struct kiocb *kiocb, struct socket *sock,
  		 *	Grab a buffer
  		 */
  
@@ -89877,7 +90170,7 @@
  
  		if (skb == NULL)
  			goto out_err;
-@@ -1989,6 +2011,7 @@ static unsigned int unix_poll(struct file *file, struct socket *sock, poll_table
+@@ -1998,6 +2020,7 @@ static unsigned int unix_poll(struct file *file, struct socket *sock, poll_table
  {
  	struct sock *sk = sock->sk;
  	unsigned int mask;
@@ -89885,7 +90178,7 @@
  
  	sock_poll_wait(file, sk->sk_sleep, wait);
  	mask = 0;
-@@ -2001,6 +2024,10 @@ static unsigned int unix_poll(struct file *file, struct socket *sock, poll_table
+@@ -2010,6 +2033,10 @@ static unsigned int unix_poll(struct file *file, struct socket *sock, poll_table
  	if (sk->sk_shutdown & RCV_SHUTDOWN)
  		mask |= POLLRDHUP;
  
@@ -89896,7 +90189,7 @@
  	/* readable? */
  	if (!skb_queue_empty(&sk->sk_receive_queue) ||
  	    (sk->sk_shutdown & RCV_SHUTDOWN))
-@@ -2015,7 +2042,7 @@ static unsigned int unix_poll(struct file *file, struct socket *sock, poll_table
+@@ -2024,7 +2051,7 @@ static unsigned int unix_poll(struct file *file, struct socket *sock, poll_table
  	 * we set writable also when the other side has shut down the
  	 * connection. This prevents stuck sockets.
  	 */

Copied and modified: dists/sid/linux-2.6/debian/patches/series/28-extra (from r16524, dists/sid/linux-2.6/debian/patches/series/27-extra)
==============================================================================
--- dists/sid/linux-2.6/debian/patches/series/27-extra	Tue Nov  2 10:45:13 2010	(r16524, copy source)
+++ dists/sid/linux-2.6/debian/patches/series/28-extra	Tue Nov  2 16:36:34 2010	(r16525)
@@ -1,7 +1,4 @@
-+ debian/revert-tcp-Combat-per-cpu-skew-in-orphan-tests.patch featureset=openvz
 + features/all/openvz/openvz.patch featureset=openvz
-+ features/all/openvz/cfq-iosched-do-not-force-idling-for-sync-workload.patch featureset=openvz
-+ features/all/openvz/openvz-printk-handle-global-log-buffer-realloc.patch featureset=openvz
 
 + debian/revert-sched-2.6.32.25-changes.patch featureset=vserver
 + debian/revert-sched-2.6.32.22-changes.patch featureset=vserver


