[linux] 04/07: perf: Fix event->ctx locking (CVE-2016-6786, CVE-2016-6787)
debian-kernel at lists.debian.org
Tue Feb 21 21:40:58 UTC 2017
This is an automated email from the git hooks/post-receive script.
benh pushed a commit to branch wheezy-security
in repository linux.
commit 1f1470a82c5c4239abe42300f6dc79bb66ce5860
Author: Ben Hutchings <ben at decadent.org.uk>
Date: Tue Feb 21 20:20:31 2017 +0000
perf: Fix event->ctx locking (CVE-2016-6786, CVE-2016-6787)
...plus dependencies
---
debian/changelog | 3 +
...lence-warning-if-config_lockdep-isn-t-set.patch | 43 ++
.../bugfix/all/perf-fix-event-ctx-locking.patch | 468 +++++++++++++++++++++
...rf-fix-perf_event_for_each-to-use-sibling.patch | 38 ++
debian/patches/series | 3 +
5 files changed, 555 insertions(+)
diff --git a/debian/changelog b/debian/changelog
index 79f8483..0c796b3 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -8,6 +8,9 @@ linux (3.2.84-2) UNRELEASED; urgency=high
* [arm*] dma-mapping: don't allow DMA mappings to be marked executable
(CVE-2014-9888)
* media: info leak in __media_device_enum_links() (CVE-2014-9895)
+ * perf: Fix perf_event_for_each() to use sibling
+ * lockdep: Silence warning if CONFIG_LOCKDEP isn't set
+ * perf: Fix event->ctx locking (CVE-2016-6786, CVE-2016-6787)
-- Salvatore Bonaccorso <carnil at debian.org> Sat, 18 Feb 2017 18:26:58 +0100
diff --git a/debian/patches/bugfix/all/lockdep-silence-warning-if-config_lockdep-isn-t-set.patch b/debian/patches/bugfix/all/lockdep-silence-warning-if-config_lockdep-isn-t-set.patch
new file mode 100644
index 0000000..4b6e218
--- /dev/null
+++ b/debian/patches/bugfix/all/lockdep-silence-warning-if-config_lockdep-isn-t-set.patch
@@ -0,0 +1,43 @@
+From: Paul Bolle <pebolle at tiscali.nl>
+Date: Thu, 24 Jan 2013 21:53:17 +0100
+Subject: lockdep: Silence warning if CONFIG_LOCKDEP isn't set
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Origin: https://git.kernel.org/linus/5cd3f5affad2109fd1458aab3f6216f2181e26ea
+
+Since commit c9a4962881929df7f1ef6e63e1b9da304faca4dd ("nfsd:
+make client_lock per net"), compiling nfs4state.o without
+CONFIG_LOCKDEP set triggers this GCC warning:
+
+ fs/nfsd/nfs4state.c: In function ‘free_client’:
+ fs/nfsd/nfs4state.c:1051:19: warning: unused variable ‘nn’ [-Wunused-variable]
+
+The cause of that warning is that lockdep_assert_held() compiles
+away if CONFIG_LOCKDEP is not set. Silence this warning by using
+the argument to lockdep_assert_held() as a nop if CONFIG_LOCKDEP
+is not set.
+
+Signed-off-by: Paul Bolle <pebolle at tiscali.nl>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Stanislav Kinsbursky <skinsbursky at parallels.com>
+Cc: J. Bruce Fields <bfields at redhat.com>
+Link: http://lkml.kernel.org/r/1359060797.1325.33.camel@x61.thuisdomein
+Signed-off-by: Ingo Molnar <mingo at kernel.org>
+[bwh: Backported to 3.2: adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ include/linux/lockdep.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -394,7 +394,7 @@ struct lock_class_key { };
+
+ #define lockdep_depth(tsk) (0)
+
+-#define lockdep_assert_held(l) do { } while (0)
++#define lockdep_assert_held(l) do { (void)(l); } while (0)
+ #define lockdep_assert_held_once(l) do { (void)(l); } while (0)
+
+ #endif /* !LOCKDEP */
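
The effect of this one-liner is easy to reproduce outside the kernel.
A minimal stand-alone sketch (hypothetical names — my_assert_held and
MY_LOCKDEP are invented for illustration, not kernel API) shows why
casting the macro argument to void counts as a "use" and silences
-Wunused-variable when the assertion otherwise compiles away:

/* build with: gcc -Wunused-variable demo.c */
#include <stdio.h>

#ifdef MY_LOCKDEP
#define my_assert_held(l) printf("held: %p\n", (void *)(l))
#else
/* old stub form was: do { } while (0) -- it discards 'l' entirely,
 * leaving the variable unused at the call site; the (void) cast
 * below is the shape of the fix this patch backports */
#define my_assert_held(l) do { (void)(l); } while (0)
#endif

int main(void)
{
    int lock = 0;
    int *nn = &lock;    /* only ever passed to the assertion */

    my_assert_held(nn); /* with the old stub, gcc warns: unused 'nn' */
    return 0;
}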
diff --git a/debian/patches/bugfix/all/perf-fix-event-ctx-locking.patch b/debian/patches/bugfix/all/perf-fix-event-ctx-locking.patch
new file mode 100644
index 0000000..af9cbcf
--- /dev/null
+++ b/debian/patches/bugfix/all/perf-fix-event-ctx-locking.patch
@@ -0,0 +1,468 @@
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Fri, 23 Jan 2015 12:24:14 +0100
+Subject: perf: Fix event->ctx locking
+Origin: https://git.kernel.org/linus/f63a8daa5812afef4f06c962351687e1ff9ccb2b
+Bug-Debian-Security: https://security-tracker.debian.org/tracker/CVE-2016-6786
+Bug-Debian-Security: https://security-tracker.debian.org/tracker/CVE-2016-6787
+
+There have been a few reported issues wrt. the lack of locking around
+changing event->ctx. This patch tries to address those.
+
+It avoids the whole rwsem thing; and while it appears to work, please
+give it some thought in review.
+
+What I did fail at is sensible runtime checks on the use of
+event->ctx; the RCU use makes it very hard.
+
+Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
+Cc: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
+Cc: Jiri Olsa <jolsa at redhat.com>
+Cc: Arnaldo Carvalho de Melo <acme at kernel.org>
+Cc: Linus Torvalds <torvalds at linux-foundation.org>
+Link: http://lkml.kernel.org/r/20150123125834.209535886@infradead.org
+Signed-off-by: Ingo Molnar <mingo at kernel.org>
+[bwh: Backported to 3.2:
+ - We don't have perf_pmu_migrate_context()
+ - Adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -670,6 +670,76 @@ static void put_ctx(struct perf_event_co
+ }
+ }
+
++/*
++ * Because of perf_event::ctx migration in sys_perf_event_open::move_group we
++ * need some magic.
++ *
++ * Those places that change perf_event::ctx will hold both
++ * perf_event_ctx::mutex of the 'old' and 'new' ctx value.
++ *
++ * Lock ordering is by mutex address. There is one other site where
++ * perf_event_context::mutex nests and that is put_event(). But remember that
++ * that is a parent<->child context relation, and migration does not affect
++ * children, therefore these two orderings should not interact.
++ *
++ * The change in perf_event::ctx does not affect children (as claimed above)
++ * because the sys_perf_event_open() case will install a new event and break
++ * the ctx parent<->child relation.
++ *
++ * The places that change perf_event::ctx will issue:
++ *
++ * perf_remove_from_context();
++ * synchronize_rcu();
++ * perf_install_in_context();
++ *
++ * to affect the change. The remove_from_context() + synchronize_rcu() should
++ * quiesce the event, after which we can install it in the new location. This
++ * means that only external vectors (perf_fops, prctl) can perturb the event
++ * while in transit. Therefore all such accessors should also acquire
++ * perf_event_context::mutex to serialize against this.
++ *
++ * However; because event->ctx can change while we're waiting to acquire
++ * ctx->mutex we must be careful and use the below perf_event_ctx_lock()
++ * function.
++ *
++ * Lock order:
++ * task_struct::perf_event_mutex
++ * perf_event_context::mutex
++ * perf_event_context::lock
++ * perf_event::child_mutex;
++ * perf_event::mmap_mutex
++ * mmap_sem
++ */
++static struct perf_event_context *perf_event_ctx_lock(struct perf_event *event)
++{
++ struct perf_event_context *ctx;
++
++again:
++ rcu_read_lock();
++ ctx = ACCESS_ONCE(event->ctx);
++ if (!atomic_inc_not_zero(&ctx->refcount)) {
++ rcu_read_unlock();
++ goto again;
++ }
++ rcu_read_unlock();
++
++ mutex_lock(&ctx->mutex);
++ if (event->ctx != ctx) {
++ mutex_unlock(&ctx->mutex);
++ put_ctx(ctx);
++ goto again;
++ }
++
++ return ctx;
++}
++
++static void perf_event_ctx_unlock(struct perf_event *event,
++ struct perf_event_context *ctx)
++{
++ mutex_unlock(&ctx->mutex);
++ put_ctx(ctx);
++}
++
+ static void unclone_ctx(struct perf_event_context *ctx)
+ {
+ if (ctx->parent_ctx) {
+@@ -1330,7 +1400,7 @@ static int __perf_event_disable(void *in
+ * is the current context on this CPU and preemption is disabled,
+ * hence we can't get into perf_event_task_sched_out for this context.
+ */
+-void perf_event_disable(struct perf_event *event)
++static void _perf_event_disable(struct perf_event *event)
+ {
+ struct perf_event_context *ctx = event->ctx;
+ struct task_struct *task = ctx->task;
+@@ -1372,6 +1442,19 @@ retry:
+ raw_spin_unlock_irq(&ctx->lock);
+ }
+
++/*
++ * Strictly speaking kernel users cannot create groups and therefore this
++ * interface does not need the perf_event_ctx_lock() magic.
++ */
++void perf_event_disable(struct perf_event *event)
++{
++ struct perf_event_context *ctx;
++
++ ctx = perf_event_ctx_lock(event);
++ _perf_event_disable(event);
++ perf_event_ctx_unlock(event, ctx);
++}
++
+ static void perf_set_shadow_time(struct perf_event *event,
+ struct perf_event_context *ctx,
+ u64 tstamp)
+@@ -1818,7 +1901,7 @@ unlock:
+ * perf_event_for_each_child or perf_event_for_each as described
+ * for perf_event_disable.
+ */
+-void perf_event_enable(struct perf_event *event)
++static void _perf_event_enable(struct perf_event *event)
+ {
+ struct perf_event_context *ctx = event->ctx;
+ struct task_struct *task = ctx->task;
+@@ -1875,7 +1958,19 @@ out:
+ raw_spin_unlock_irq(&ctx->lock);
+ }
+
+-int perf_event_refresh(struct perf_event *event, int refresh)
++/*
++ * See perf_event_disable();
++ */
++void perf_event_enable(struct perf_event *event)
++{
++ struct perf_event_context *ctx;
++
++ ctx = perf_event_ctx_lock(event);
++ _perf_event_enable(event);
++ perf_event_ctx_unlock(event, ctx);
++}
++
++static int _perf_event_refresh(struct perf_event *event, int refresh)
+ {
+ /*
+ * not supported on inherited events
+@@ -1884,10 +1979,25 @@ int perf_event_refresh(struct perf_event
+ return -EINVAL;
+
+ atomic_add(refresh, &event->event_limit);
+- perf_event_enable(event);
++ _perf_event_enable(event);
+
+ return 0;
+ }
++
++/*
++ * See perf_event_disable()
++ */
++int perf_event_refresh(struct perf_event *event, int refresh)
++{
++ struct perf_event_context *ctx;
++ int ret;
++
++ ctx = perf_event_ctx_lock(event);
++ ret = _perf_event_refresh(event, refresh);
++ perf_event_ctx_unlock(event, ctx);
++
++ return ret;
++}
+ EXPORT_SYMBOL_GPL(perf_event_refresh);
+
+ static void ctx_sched_out(struct perf_event_context *ctx,
+@@ -3115,7 +3225,16 @@ static void put_event(struct perf_event
+ rcu_read_unlock();
+
+ if (owner) {
+- mutex_lock(&owner->perf_event_mutex);
++ /*
++ * If we're here through perf_event_exit_task() we're already
++ * holding ctx->mutex which would be an inversion wrt. the
++ * normal lock order.
++ *
++ * However we can safely take this lock because its the child
++ * ctx->mutex.
++ */
++ mutex_lock_nested(&owner->perf_event_mutex, SINGLE_DEPTH_NESTING);
++
+ /*
+ * We have to re-check the event->owner field, if it is cleared
+ * we raced with perf_event_exit_task(), acquiring the mutex
+@@ -3167,12 +3286,13 @@ static int perf_event_read_group(struct
+ u64 read_format, char __user *buf)
+ {
+ struct perf_event *leader = event->group_leader, *sub;
+- int n = 0, size = 0, ret = -EFAULT;
+ struct perf_event_context *ctx = leader->ctx;
+- u64 values[5];
++ int n = 0, size = 0, ret;
+ u64 count, enabled, running;
++ u64 values[5];
++
++ lockdep_assert_held(&ctx->mutex);
+
+- mutex_lock(&ctx->mutex);
+ count = perf_event_read_value(leader, &enabled, &running);
+
+ values[n++] = 1 + leader->nr_siblings;
+@@ -3187,7 +3307,7 @@ static int perf_event_read_group(struct
+ size = n * sizeof(u64);
+
+ if (copy_to_user(buf, values, size))
+- goto unlock;
++ return -EFAULT;
+
+ ret = size;
+
+@@ -3201,14 +3321,11 @@ static int perf_event_read_group(struct
+ size = n * sizeof(u64);
+
+ if (copy_to_user(buf + ret, values, size)) {
+- ret = -EFAULT;
+- goto unlock;
++ return -EFAULT;
+ }
+
+ ret += size;
+ }
+-unlock:
+- mutex_unlock(&ctx->mutex);
+
+ return ret;
+ }
+@@ -3267,8 +3384,14 @@ static ssize_t
+ perf_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ {
+ struct perf_event *event = file->private_data;
++ struct perf_event_context *ctx;
++ int ret;
+
+- return perf_read_hw(event, buf, count);
++ ctx = perf_event_ctx_lock(event);
++ ret = perf_read_hw(event, buf, count);
++ perf_event_ctx_unlock(event, ctx);
++
++ return ret;
+ }
+
+ static unsigned int perf_poll(struct file *file, poll_table *wait)
+@@ -3292,7 +3415,7 @@ static unsigned int perf_poll(struct fil
+ return events;
+ }
+
+-static void perf_event_reset(struct perf_event *event)
++static void _perf_event_reset(struct perf_event *event)
+ {
+ (void)perf_event_read(event);
+ local64_set(&event->count, 0);
+@@ -3311,6 +3434,7 @@ static void perf_event_for_each_child(st
+ struct perf_event *child;
+
+ WARN_ON_ONCE(event->ctx->parent_ctx);
++
+ mutex_lock(&event->child_mutex);
+ func(event);
+ list_for_each_entry(child, &event->child_list, child_list)
+@@ -3324,15 +3448,14 @@ static void perf_event_for_each(struct p
+ struct perf_event_context *ctx = event->ctx;
+ struct perf_event *sibling;
+
+- WARN_ON_ONCE(ctx->parent_ctx);
+- mutex_lock(&ctx->mutex);
++ lockdep_assert_held(&ctx->mutex);
++
+ event = event->group_leader;
+
+ perf_event_for_each_child(event, func);
+ func(event);
+ list_for_each_entry(sibling, &event->sibling_list, group_entry)
+ perf_event_for_each_child(sibling, func);
+- mutex_unlock(&ctx->mutex);
+ }
+
+ static int perf_event_period(struct perf_event *event, u64 __user *arg)
+@@ -3391,25 +3514,24 @@ static int perf_event_set_output(struct
+ struct perf_event *output_event);
+ static int perf_event_set_filter(struct perf_event *event, void __user *arg);
+
+-static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned long arg)
+ {
+- struct perf_event *event = file->private_data;
+ void (*func)(struct perf_event *);
+ u32 flags = arg;
+
+ switch (cmd) {
+ case PERF_EVENT_IOC_ENABLE:
+- func = perf_event_enable;
++ func = _perf_event_enable;
+ break;
+ case PERF_EVENT_IOC_DISABLE:
+- func = perf_event_disable;
++ func = _perf_event_disable;
+ break;
+ case PERF_EVENT_IOC_RESET:
+- func = perf_event_reset;
++ func = _perf_event_reset;
+ break;
+
+ case PERF_EVENT_IOC_REFRESH:
+- return perf_event_refresh(event, arg);
++ return _perf_event_refresh(event, arg);
+
+ case PERF_EVENT_IOC_PERIOD:
+ return perf_event_period(event, (u64 __user *)arg);
+@@ -3450,6 +3572,19 @@ static long perf_ioctl(struct file *file
+ return 0;
+ }
+
++static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++ struct perf_event *event = file->private_data;
++ struct perf_event_context *ctx;
++ long ret;
++
++ ctx = perf_event_ctx_lock(event);
++ ret = _perf_ioctl(event, cmd, arg);
++ perf_event_ctx_unlock(event, ctx);
++
++ return ret;
++}
++
+ #ifdef CONFIG_COMPAT
+ static long perf_compat_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+@@ -3471,11 +3606,15 @@ static long perf_compat_ioctl(struct fil
+
+ int perf_event_task_enable(void)
+ {
++ struct perf_event_context *ctx;
+ struct perf_event *event;
+
+ mutex_lock(&current->perf_event_mutex);
+- list_for_each_entry(event, &current->perf_event_list, owner_entry)
+- perf_event_for_each_child(event, perf_event_enable);
++ list_for_each_entry(event, &current->perf_event_list, owner_entry) {
++ ctx = perf_event_ctx_lock(event);
++ perf_event_for_each_child(event, _perf_event_enable);
++ perf_event_ctx_unlock(event, ctx);
++ }
+ mutex_unlock(&current->perf_event_mutex);
+
+ return 0;
+@@ -3483,11 +3622,15 @@ int perf_event_task_enable(void)
+
+ int perf_event_task_disable(void)
+ {
++ struct perf_event_context *ctx;
+ struct perf_event *event;
+
+ mutex_lock(&current->perf_event_mutex);
+- list_for_each_entry(event, &current->perf_event_list, owner_entry)
+- perf_event_for_each_child(event, perf_event_disable);
++ list_for_each_entry(event, &current->perf_event_list, owner_entry) {
++ ctx = perf_event_ctx_lock(event);
++ perf_event_for_each_child(event, _perf_event_disable);
++ perf_event_ctx_unlock(event, ctx);
++ }
+ mutex_unlock(&current->perf_event_mutex);
+
+ return 0;
+@@ -6327,6 +6470,15 @@ out:
+ return ret;
+ }
+
++static void mutex_lock_double(struct mutex *a, struct mutex *b)
++{
++ if (b < a)
++ swap(a, b);
++
++ mutex_lock(a);
++ mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
++}
++
+ /**
+ * sys_perf_event_open - open a performance event, associate it to a task/cpu
+ *
+@@ -6342,7 +6494,7 @@ SYSCALL_DEFINE5(perf_event_open,
+ struct perf_event *group_leader = NULL, *output_event = NULL;
+ struct perf_event *event, *sibling;
+ struct perf_event_attr attr;
+- struct perf_event_context *ctx;
++ struct perf_event_context *ctx, *uninitialized_var(gctx);
+ struct file *event_file = NULL;
+ struct file *group_file = NULL;
+ struct task_struct *task = NULL;
+@@ -6517,9 +6669,14 @@ SYSCALL_DEFINE5(perf_event_open,
+ }
+
+ if (move_group) {
+- struct perf_event_context *gctx = group_leader->ctx;
++ gctx = group_leader->ctx;
++
++ /*
++ * See perf_event_ctx_lock() for comments on the details
++ * of swizzling perf_event::ctx.
++ */
++ mutex_lock_double(&gctx->mutex, &ctx->mutex);
+
+- mutex_lock(&gctx->mutex);
+ perf_remove_from_context(group_leader, false);
+
+ /*
+@@ -6534,14 +6691,19 @@ SYSCALL_DEFINE5(perf_event_open,
+ perf_event__state_init(sibling);
+ put_ctx(gctx);
+ }
+- mutex_unlock(&gctx->mutex);
+- put_ctx(gctx);
++ } else {
++ mutex_lock(&ctx->mutex);
+ }
+
+ WARN_ON_ONCE(ctx->parent_ctx);
+- mutex_lock(&ctx->mutex);
+
+ if (move_group) {
++ /*
++ * Wait for everybody to stop referencing the events through
++ * the old lists, before installing it on new lists.
++ */
++ synchronize_rcu();
++
+ perf_install_in_context(ctx, group_leader, cpu);
+ get_ctx(ctx);
+ list_for_each_entry(sibling, &group_leader->sibling_list,
+@@ -6554,6 +6716,11 @@ SYSCALL_DEFINE5(perf_event_open,
+ perf_install_in_context(ctx, event, cpu);
+ ++ctx->generation;
+ perf_unpin_context(ctx);
++
++ if (move_group) {
++ mutex_unlock(&gctx->mutex);
++ put_ctx(gctx);
++ }
+ mutex_unlock(&ctx->mutex);
+
+ event->owner = current;
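
The heart of this patch is the lock-then-recheck loop in
perf_event_ctx_lock(). The same pattern can be sketched in
stand-alone C11 with pthreads (hypothetical types and names; the
kernel additionally relies on RCU to keep the context memory valid
between the pointer load and the refcount increment, which this
sketch elides):

#include <pthread.h>
#include <stdatomic.h>

struct ctx {
    pthread_mutex_t mutex;
    atomic_int refcount;
};

struct event {
    struct ctx * _Atomic ctx;   /* may be swizzled concurrently */
};

/* inc-not-zero: refuse to take a reference on a dying context */
static int get_ctx_ref(struct ctx *c)
{
    int old = atomic_load(&c->refcount);
    while (old != 0)
        if (atomic_compare_exchange_weak(&c->refcount, &old, old + 1))
            return 1;
    return 0;
}

static void put_ctx_ref(struct ctx *c)
{
    atomic_fetch_sub(&c->refcount, 1);  /* freeing elided */
}

static struct ctx *event_ctx_lock(struct event *e)
{
    struct ctx *c;

again:
    c = atomic_load(&e->ctx);
    if (!get_ctx_ref(c))
        goto again;

    pthread_mutex_lock(&c->mutex);
    if (atomic_load(&e->ctx) != c) {
        /* ctx moved while we slept on the mutex: drop and retry */
        pthread_mutex_unlock(&c->mutex);
        put_ctx_ref(c);
        goto again;
    }
    return c;   /* e->ctx is now stable until the mutex is released */
}

int main(void)
{
    static struct ctx c0 = { PTHREAD_MUTEX_INITIALIZER, 1 };
    struct event e = { &c0 };
    struct ctx *c = event_ctx_lock(&e);

    /* ... e.ctx cannot change out from under us here ... */
    pthread_mutex_unlock(&c->mutex);
    put_ctx_ref(c);
    return 0;
}

The companion trick in the patch, mutex_lock_double(), avoids an ABBA
deadlock between the old and new context in the move_group path by
always locking the lower mutex address first, which gives every pair
of contexts a single global lock order.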
diff --git a/debian/patches/bugfix/all/perf-fix-perf_event_for_each-to-use-sibling.patch b/debian/patches/bugfix/all/perf-fix-perf_event_for_each-to-use-sibling.patch
new file mode 100644
index 0000000..e407526
--- /dev/null
+++ b/debian/patches/bugfix/all/perf-fix-perf_event_for_each-to-use-sibling.patch
@@ -0,0 +1,38 @@
+From: Michael Ellerman <michael at ellerman.id.au>
+Date: Wed, 11 Apr 2012 11:54:13 +1000
+Subject: perf: Fix perf_event_for_each() to use sibling
+Origin: https://git.kernel.org/linus/724b6daa13e100067c30cfc4d1ad06629609dc4e
+
+In perf_event_for_each() we call a function on an event, and then
+iterate over the siblings of the event.
+
+However we don't call the function on the siblings, we call it
+repeatedly on the original event - it seems "obvious" that we should
+be calling it with sibling as the argument.
+
+It looks like this broke in commit 75f937f24bd9 ("Fix ctx->mutex
+vs counter->mutex inversion").
+
+The only effect of the bug is that the PERF_IOC_FLAG_GROUP parameter
+to the ioctls doesn't work.
+
+Signed-off-by: Michael Ellerman <michael at ellerman.id.au>
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/r/1334109253-31329-1-git-send-email-michael@ellerman.id.au
+Signed-off-by: Ingo Molnar <mingo at kernel.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ kernel/events/core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3331,7 +3331,7 @@ static void perf_event_for_each(struct p
+ perf_event_for_each_child(event, func);
+ func(event);
+ list_for_each_entry(sibling, &event->sibling_list, group_entry)
+- perf_event_for_each_child(event, func);
++ perf_event_for_each_child(sibling, func);
+ mutex_unlock(&ctx->mutex);
+ }
+
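In miniature (a hypothetical stand-alone loop with invented names),
the bug and the one-line fix look like this:

/* Apply func to a group leader and each of its siblings. Before
 * the fix, the loop body passed the leader every time, so
 * PERF_IOC_FLAG_GROUP operations never reached the siblings. */
struct node { struct node *next; };

void for_each_in_group(struct node *leader, void (*func)(struct node *))
{
    struct node *sibling;

    func(leader);
    for (sibling = leader->next; sibling; sibling = sibling->next)
        func(sibling);    /* was: func(leader) -- the bug */
}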
diff --git a/debian/patches/series b/debian/patches/series
index 87426fe..c507db7 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -1129,6 +1129,9 @@ bugfix/all/dccp-fix-freeing-skb-too-early-for-IPV6_RECVPKTINFO.patch
bugfix/all/sctp-avoid-BUG_ON-on-sctp_wait_for_sndbuf.patch
bugfix/arm/arm-dma-mapping-don-t-allow-dma-mappings-to-be-marked-executable.patch
bugfix/all/media-info-leak-in-__media_device_enum_links.patch
+bugfix/all/perf-fix-perf_event_for_each-to-use-sibling.patch
+bugfix/all/lockdep-silence-warning-if-config_lockdep-isn-t-set.patch
+bugfix/all/perf-fix-event-ctx-locking.patch
# ABI maintenance
debian/perf-hide-abi-change-in-3.2.30.patch
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/kernel/linux.git