[kernel] r17887 - in dists/squeeze/linux-2.6/debian: . patches/bugfix/all patches/series

Maximilian Attems maks at alioth.debian.org
Sat Aug 6 11:24:49 UTC 2011


Author: maks
Date: Sat Aug  6 11:24:48 2011
New Revision: 17887

Log:
add 2.6.32.42+drm33.19

Added:
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Add-a-no-lvds-quirk-for-the-Asus-EeeBox-PC-.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Implement-fair-lru-eviction-across-both-rin.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Maintain-LRU-order-of-inactive-objects-upon.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Move-the-eviction-logic-to-its-own-file.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Periodically-flush-the-active-lists-and-req.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-evict-Ensure-we-completely-cleanup-on-failu.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-prepare-for-fair-lru-eviction.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-implement-helper-functions-for-scanning-lru-list.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-radeon-kms-fix-for-radeon-on-systems-4GB-without.patch
   dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm_mm-extract-check_free_mm_node.patch
Modified:
   dists/squeeze/linux-2.6/debian/changelog
   dists/squeeze/linux-2.6/debian/patches/series/36

Modified: dists/squeeze/linux-2.6/debian/changelog
==============================================================================
--- dists/squeeze/linux-2.6/debian/changelog	Fri Aug  5 17:56:38 2011	(r17886)
+++ dists/squeeze/linux-2.6/debian/changelog	Sat Aug  6 11:24:48 2011	(r17887)
@@ -4,6 +4,12 @@
   * Add drm change from 2.6.32.41+drm33.18:
     - drm/radeon/kms: fix bad shift in atom iio table parser
   * [openvz] ptrace: Don't allow to trace a process without memory map.
+  * Add drm change from 2.6.32.42+drm33.19, including:
+    - drm/i915: Implement fair lru eviction across both rings. (v2)
+    - drm/i915: Maintain LRU order of inactive objects upon access by CPU (v2)
+    - drm/i915/evict: Ensure we completely cleanup on failure
+    - drm/i915: Add a no lvds quirk for the Asus EeeBox PC EB1007
+    - drm/radeon/kms: fix for radeon on systems >4GB without hardware iommu
 
   [ Ben Hutchings ]
   * Add longterm release 2.6.32.42, including:

Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Add-a-no-lvds-quirk-for-the-Asus-EeeBox-PC-.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Add-a-no-lvds-quirk-for-the-Asus-EeeBox-PC-.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,43 @@
+From: Hans de Goede <hdegoede at redhat.com>
+Date: Sat, 4 Jun 2011 15:39:21 +0200
+Subject: [PATCH 09/10] drm/i915: Add a no lvds quirk for the Asus EeeBox PC EB1007
+
+commit b0088882c63a9bdec8f3671438928ac7ab4bbcd8 upstream.
+
+commit 6a574b5b9b186e28abd3e571dfd1700c5220b510 upstream.
+
+I found this while figuring out why gnome-shell would not run on my
+Asus EeeBox PC EB1007. As a standalone "pc" this device clearly does not have
+an internal panel, yet it claims it does. Add a quirk to fix this.
+
+Signed-off-by: Hans de Goede <hdegoede at redhat.com>
+Reviewed-by: Keith Packard <keithp at keithp.com>
+Signed-off-by: Keith Packard <keithp at keithp.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/i915/intel_lvds.c |    8 ++++++++
+ 1 files changed, 8 insertions(+), 0 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index d34c09f..7cfc814 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -865,6 +865,14 @@ static const struct dmi_system_id intel_no_lvds[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "U800"),
+ 		},
+ 	},
++	{
++		.callback = intel_no_lvds_dmi_callback,
++		.ident = "Asus EeeBox PC EB1007",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "EB1007"),
++		},
++	},
+ 
+ 	{ }	/* terminating entry */
+ };
+-- 
+1.7.2.5
+
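For readers unfamiliar with DMI quirk tables: the hunk above appends one entry to intel_no_lvds[], and the kernel's dmi_check_system() invokes the callback when every DMI_MATCH field in an entry matches the firmware-reported strings. A minimal user-space sketch of that matching rule (hypothetical struct and helper names, not the kernel API):

```c
#include <string.h>

/* Simplified stand-in for the kernel's struct dmi_system_id matching:
 * a quirk fires only if every field of an entry matches exactly. */
struct no_lvds_quirk {
    const char *ident;
    const char *sys_vendor;   /* corresponds to DMI_SYS_VENDOR */
    const char *product_name; /* corresponds to DMI_PRODUCT_NAME */
};

static const struct no_lvds_quirk quirks[] = {
    { "Asus EeeBox PC EB1007", "ASUSTeK Computer INC.", "EB1007" },
    { 0 } /* terminating entry, as in the kernel table */
};

/* Returns the ident of the first matching quirk, or NULL if none match. */
static const char *
match_no_lvds(const char *vendor, const char *product)
{
    const struct no_lvds_quirk *q;

    for (q = quirks; q->ident; q++) {
        if (strcmp(q->sys_vendor, vendor) == 0 &&
            strcmp(q->product_name, product) == 0)
            return q->ident;
    }
    return 0;
}
```

In the real driver the matched entry's callback simply sets a flag that makes the LVDS probe bail out, so no panel connector is registered on the quirked machine.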

Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Implement-fair-lru-eviction-across-both-rin.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Implement-fair-lru-eviction-across-both-rin.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,337 @@
+From: Chris Wilson <chris at chris-wilson.co.uk>
+Date: Fri, 17 Jun 2011 10:04:21 -0500
+Subject: [PATCH 05/10] drm/i915: Implement fair lru eviction across both rings. (v2)
+
+commit f42384c96e7e53c42615b16396c47edf40667b72 upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+Based in a large part upon Daniel Vetter's implementation and adapted
+for handling multiple rings in a single pass.
+
+This should lead to better gtt usage and fixes the page-fault-of-doom
+triggered. The fairness is provided by scanning through the GTT space
+amalgamating space in rendering order. As soon as we have a contiguous
+space in the GTT large enough for the new object (and its alignment),
+evict any object which lies within that space. This should keep more
+objects resident in the GTT.
+
+Doing throughput testing on a PineView machine with cairo-perf-trace
+indicates that there is very little difference with the new LRU scan,
+perhaps a small improvement... Except oddly for the poppler trace.
+
+Reference:
+
+  Bug 15911 - Intermittent X crash (freeze)
+  https://bugzilla.kernel.org/show_bug.cgi?id=15911
+
+  Bug 20152 - cannot view JPG in firefox when running UXA
+  https://bugs.freedesktop.org/show_bug.cgi?id=20152
+
+  Bug 24369 - Hang when scrolling firefox page with window in front
+  https://bugs.freedesktop.org/show_bug.cgi?id=24369
+
+  Bug 28478 - Intermittent graphics lockups due to overflow/loop
+  https://bugs.freedesktop.org/show_bug.cgi?id=28478
+
+v2: Attempt to clarify the logic and order of eviction through the use
+of comments and macros.
+
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+Reviewed-by: Daniel Vetter <daniel at ffwll.ch>
+Signed-off-by: Eric Anholt <eric at anholt.net>
+(backported from commit cd377ea93f34cbd6ec49c868b66a5a7ab184775c upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/i915/i915_drv.h       |    2 +
+ drivers/gpu/drm/i915/i915_gem_evict.c |  240 +++++++++++++++++----------------
+ 2 files changed, 127 insertions(+), 115 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index f7e12ba..e0acd00 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -610,6 +610,8 @@ struct drm_i915_gem_object {
+ 	struct list_head list;
+ 	/** This object's place on GPU write list */
+ 	struct list_head gpu_write_list;
++	/** This object's place on eviction list */
++	struct list_head evict_list;
+ 
+ 	/** This object's place on the fenced object LRU */
+ 	struct list_head fence_list;
+diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
+index 127a28a..84ed1a7 100644
+--- a/drivers/gpu/drm/i915/i915_gem_evict.c
++++ b/drivers/gpu/drm/i915/i915_gem_evict.c
+@@ -31,140 +31,150 @@
+ #include "i915_drv.h"
+ #include "i915_drm.h"
+ 
+-static inline int
+-i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj_priv)
+-{
+-	return obj_priv->madv == I915_MADV_DONTNEED;
+-}
+-
+-static int
+-i915_gem_scan_inactive_list_and_evict(struct drm_device *dev, int min_size,
+-				      unsigned alignment, int *found)
++static struct drm_i915_gem_object *
++i915_gem_next_active_object(struct drm_device *dev,
++			    struct list_head **iter)
+ {
+ 	drm_i915_private_t *dev_priv = dev->dev_private;
+-	struct drm_gem_object *obj;
+-	struct drm_i915_gem_object *obj_priv;
+-	struct drm_gem_object *best = NULL;
+-	struct drm_gem_object *first = NULL;
+-
+-	/* Try to find the smallest clean object */
+-	list_for_each_entry(obj_priv, &dev_priv->mm.inactive_list, list) {
+-		struct drm_gem_object *obj = obj_priv->obj;
+-		if (obj->size >= min_size) {
+-			if ((!obj_priv->dirty ||
+-			     i915_gem_object_is_purgeable(obj_priv)) &&
+-			    (!best || obj->size < best->size)) {
+-				best = obj;
+-				if (best->size == min_size)
+-					break;
+-			}
+-			if (!first)
+-			    first = obj;
+-		}
+-	}
+-
+-	obj = best ? best : first;
+-
+-	if (!obj) {
+-		*found = 0;
+-		return 0;
+-	}
++	struct drm_i915_gem_object *obj = NULL;
+ 
+-	*found = 1;
++	if (*iter != &dev_priv->mm.active_list)
++		obj = list_entry(*iter,
++				 struct drm_i915_gem_object,
++				 list);
+ 
+-#if WATCH_LRU
+-	DRM_INFO("%s: evicting %p\n", __func__, obj);
+-#endif
+-	obj_priv = obj->driver_private;
+-	BUG_ON(obj_priv->pin_count != 0);
+-	BUG_ON(obj_priv->active);
++	*iter = (*iter)->next;
++	return obj;
++}
+ 
+-	/* Wait on the rendering and unbind the buffer. */
+-	return i915_gem_object_unbind(obj);
++static bool
++mark_free(struct drm_i915_gem_object *obj_priv,
++	   struct list_head *unwind)
++{
++	list_add(&obj_priv->evict_list, unwind);
++	return drm_mm_scan_add_block(obj_priv->gtt_space);
+ }
+ 
++#define i915_for_each_active_object(OBJ, I) \
++	*(I) = dev_priv->mm.active_list.next; \
++	while (((OBJ) = i915_gem_next_active_object(dev, (I))) != NULL)
++
+ int
+-i915_gem_evict_something(struct drm_device *dev,
+-			 int min_size, unsigned alignment)
++i915_gem_evict_something(struct drm_device *dev, int min_size, unsigned alignment)
+ {
+ 	drm_i915_private_t *dev_priv = dev->dev_private;
+-	int ret, found;
+-
+-	for (;;) {
+-		i915_gem_retire_requests(dev);
+-
+-		/* If there's an inactive buffer available now, grab it
+-		 * and be done.
+-		 */
+-		ret = i915_gem_scan_inactive_list_and_evict(dev, min_size,
+-							    alignment,
+-							    &found);
+-		if (found)
+-			return ret;
++	struct list_head eviction_list, unwind_list;
++	struct drm_i915_gem_object *obj_priv, *tmp_obj_priv;
++	struct list_head *iter;
++	int ret = 0;
+ 
+-		/* If we didn't get anything, but the ring is still processing
+-		 * things, wait for the next to finish and hopefully leave us
+-		 * a buffer to evict.
+-		 */
+-		if (!list_empty(&dev_priv->mm.request_list)) {
+-			struct drm_i915_gem_request *request;
++	i915_gem_retire_requests(dev);
+ 
+-			request = list_first_entry(&dev_priv->mm.request_list,
+-						   struct drm_i915_gem_request,
+-						   list);
++	/* Re-check for free space after retiring requests */
++	if (drm_mm_search_free(&dev_priv->mm.gtt_space,
++			       min_size, alignment, 0))
++		return 0;
+ 
+-			ret = i915_do_wait_request(dev, request->seqno, true);
+-			if (ret)
+-				return ret;
++	/*
++	 * The goal is to evict objects and amalgamate space in LRU order.
++	 * The oldest idle objects reside on the inactive list, which is in
++	 * retirement order. The next objects to retire are those on the
++	 * active list that do not have an outstanding flush. Once the
++	 * hardware reports completion (the seqno is updated after the
++	 * batchbuffer has been finished) the clean buffer objects would
++	 * be retired to the inactive list. Any dirty objects would be added
++	 * to the tail of the flushing list. So after processing the clean
++	 * active objects we need to emit a MI_FLUSH to retire the flushing
++	 * list, hence the retirement order of the flushing list is in
++	 * advance of the dirty objects on the active list.
++	 *
++	 * The retirement sequence is thus:
++	 *   1. Inactive objects (already retired)
++	 *   2. Clean active objects
++	 *   3. Flushing list
++	 *   4. Dirty active objects.
++	 *
++	 * On each list, the oldest objects lie at the HEAD with the freshest
++	 * object on the TAIL.
++	 */
++
++	INIT_LIST_HEAD(&unwind_list);
++	drm_mm_init_scan(&dev_priv->mm.gtt_space, min_size, alignment);
++
++	/* First see if there is a large enough contiguous idle region... */
++	list_for_each_entry(obj_priv, &dev_priv->mm.inactive_list, list) {
++		if (mark_free(obj_priv, &unwind_list))
++			goto found;
++	}
+ 
++	/* Now merge in the soon-to-be-expired objects... */
++	i915_for_each_active_object(obj_priv, &iter) {
++		/* Does the object require an outstanding flush? */
++		if (obj_priv->obj->write_domain || obj_priv->pin_count)
+ 			continue;
+-		}
+ 
+-		/* If we didn't have anything on the request list but there
+-		 * are buffers awaiting a flush, emit one and try again.
+-		 * When we wait on it, those buffers waiting for that flush
+-		 * will get moved to inactive.
+-		 */
+-		if (!list_empty(&dev_priv->mm.flushing_list)) {
+-			struct drm_gem_object *obj = NULL;
+-			struct drm_i915_gem_object *obj_priv;
+-
+-			/* Find an object that we can immediately reuse */
+-			list_for_each_entry(obj_priv, &dev_priv->mm.flushing_list, list) {
+-				obj = obj_priv->obj;
+-				if (obj->size >= min_size)
+-					break;
+-
+-				obj = NULL;
+-			}
+-
+-			if (obj != NULL) {
+-				uint32_t seqno;
+-
+-				i915_gem_flush(dev,
+-					       obj->write_domain,
+-					       obj->write_domain);
+-				seqno = i915_add_request(dev, NULL, obj->write_domain);
+-				if (seqno == 0)
+-					return -ENOMEM;
+-
+-				ret = i915_do_wait_request(dev, seqno, true);
+-				if (ret)
+-					return ret;
+-
+-				continue;
+-			}
++		if (mark_free(obj_priv, &unwind_list))
++			goto found;
++	}
++
++	/* Finally add anything with a pending flush (in order of retirement) */
++	list_for_each_entry(obj_priv, &dev_priv->mm.flushing_list, list) {
++		if (obj_priv->pin_count)
++			continue;
++
++		if (mark_free(obj_priv, &unwind_list))
++			goto found;
++	}
++	i915_for_each_active_object(obj_priv, &iter) {
++		if (! obj_priv->obj->write_domain || obj_priv->pin_count)
++			continue;
++
++		if (mark_free(obj_priv, &unwind_list))
++			goto found;
++	}
++
++	/* Nothing found, clean up and bail out! */
++	list_for_each_entry(obj_priv, &unwind_list, evict_list) {
++		ret = drm_mm_scan_remove_block(obj_priv->gtt_space);
++		BUG_ON(ret);
++	}
++
++	/* We expect the caller to unpin, evict all and try again, or give up.
++	 * So calling i915_gem_evict_everything() is unnecessary.
++	 */
++	return -ENOSPC;
++
++found:
++	INIT_LIST_HEAD(&eviction_list);
++	list_for_each_entry_safe(obj_priv, tmp_obj_priv,
++				 &unwind_list, evict_list) {
++		if (drm_mm_scan_remove_block(obj_priv->gtt_space)) {
++			/* drm_mm doesn't allow any other other operations while
++			 * scanning, therefore store to be evicted objects on a
++			 * temporary list. */
++			list_move(&obj_priv->evict_list, &eviction_list);
+ 		}
++	}
+ 
+-		/* If we didn't do any of the above, there's no single buffer
+-		 * large enough to swap out for the new one, so just evict
+-		 * everything and start again. (This should be rare.)
+-		 */
+-		if (!list_empty (&dev_priv->mm.inactive_list))
+-			return i915_gem_evict_inactive(dev);
+-		else
+-			return i915_gem_evict_everything(dev);
++	/* Unbinding will emit any required flushes */
++	list_for_each_entry_safe(obj_priv, tmp_obj_priv,
++				 &eviction_list, evict_list) {
++#if WATCH_LRU
++		DRM_INFO("%s: evicting %p\n", __func__, obj);
++#endif
++		ret = i915_gem_object_unbind(obj_priv->obj);
++		if (ret)
++			return ret;
+ 	}
++
++	/* The just created free hole should be on the top of the free stack
++	 * maintained by drm_mm, so this BUG_ON actually executes in O(1).
++	 * Furthermore all accessed data has just recently been used, so it
++	 * should be really fast, too. */
++	BUG_ON(!drm_mm_search_free(&dev_priv->mm.gtt_space, min_size,
++				   alignment, 0));
++
++	return 0;
+ }
+ 
+ int
+-- 
+1.7.2.5
+
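The core idea of the patch above — walk objects in LRU order, marking each as a scan candidate until the marked blocks amalgamate into a contiguous hole large enough for the new object — can be sketched outside the kernel. This toy model (hypothetical names; the real drm_mm scan tracks holes incrementally rather than rescanning) assumes a fully packed 1D address range with no pre-existing holes:

```c
#include <stddef.h>

/* One GTT allocation: a half-open byte range [offset, offset + size). */
struct block { unsigned offset, size; int marked; };

/* Mark blocks free in LRU order until the marked blocks form a
 * contiguous run of at least min_size bytes. Returns how many LRU
 * entries had to be marked, or -1 if even marking everything fails. */
static int scan_until_hole(struct block *lru[], int n, unsigned min_size)
{
    int marked = 0;

    while (marked < n) {
        int i, j;

        lru[marked++]->marked = 1;

        /* Grow a run of address-adjacent marked blocks from each seed.
         * O(n^3) worst case is fine for a sketch; drm_mm amortizes this. */
        for (i = 0; i < marked; i++) {
            unsigned start = lru[i]->offset;
            unsigned end = start + lru[i]->size;
            int grew = 1;

            while (grew) {
                grew = 0;
                for (j = 0; j < marked; j++) {
                    if (lru[j]->offset == end) {
                        end += lru[j]->size;
                        grew = 1;
                    } else if (lru[j]->offset + lru[j]->size == start) {
                        start = lru[j]->offset;
                        grew = 1;
                    }
                }
            }
            if (end - start >= min_size)
                return marked;
        }
    }
    return -1;
}
```

This is why the patch keeps an unwind_list: blocks are marked purely speculatively, and once a hole is found only the blocks actually inside it are moved to the eviction list while the rest are unmarked again.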

Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Maintain-LRU-order-of-inactive-objects-upon.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Maintain-LRU-order-of-inactive-objects-upon.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,73 @@
+From: Chris Wilson <chris at chris-wilson.co.uk>
+Date: Fri, 17 Jun 2011 10:04:22 -0500
+Subject: [PATCH 06/10] drm/i915: Maintain LRU order of inactive objects upon access by CPU (v2)
+
+commit f8fd3ab5b8bf8f99ea13ebbecabd5c8e42c82948 upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+In order to reduce the penalty of fallbacks under memory pressure and to
+avoid a potential immediate ping-pong of evicting a mmaped buffer, we
+move the object to the tail of the inactive list when a page is freshly
+faulted or the object is moved into the CPU domain.
+
+We choose not to protect the CPU objects from casual eviction,
+preferring to keep the GPU active for as long as possible.
+
+v2: Daniel Vetter found a bug where I forgot that pinned objects are
+kept off the inactive list.
+
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+Signed-off-by: Eric Anholt <eric at anholt.net>
+(backported from commit 7d1c4804ae98cdee572d7d10d8a5deaa2e686285 upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/i915/i915_gem.c |   16 ++++++++++++++++
+ 1 files changed, 16 insertions(+), 0 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 2e4ff69..b3c7bd1 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -55,6 +55,14 @@ static int i915_gem_phys_pwrite(struct drm_device *dev, struct drm_gem_object *o
+ static LIST_HEAD(shrink_list);
+ static DEFINE_SPINLOCK(shrink_list_lock);
+ 
++static inline bool
++i915_gem_object_is_inactive(struct drm_i915_gem_object *obj_priv)
++{
++	return obj_priv->gtt_space &&
++		!obj_priv->active &&
++		obj_priv->pin_count == 0;
++}
++
+ int i915_gem_do_init(struct drm_device *dev, unsigned long start,
+ 		     unsigned long end)
+ {
+@@ -1068,6 +1076,11 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
+ 		ret = i915_gem_object_set_to_cpu_domain(obj, write_domain != 0);
+ 	}
+ 
++	
++	/* Maintain LRU order of "inactive" objects */
++	if (ret == 0 && i915_gem_object_is_inactive(obj_priv))
++		list_move_tail(&obj_priv->list, &dev_priv->mm.inactive_list);
++
+ 	drm_gem_object_unreference(obj);
+ 	mutex_unlock(&dev->struct_mutex);
+ 	return ret;
+@@ -1203,6 +1216,9 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+ 			goto unlock;
+ 	}
+ 
++	if (i915_gem_object_is_inactive(obj_priv))
++		list_move_tail(&obj_priv->list, &dev_priv->mm.inactive_list);
++
+ 	pfn = ((dev->agp->base + obj_priv->gtt_offset) >> PAGE_SHIFT) +
+ 		page_offset;
+ 
+-- 
+1.7.2.5
+
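The behavioural change above is a single list_move_tail(): a freshly touched, still-inactive object moves to the tail of the inactive list, so the eviction scan, which starts at the head, reaches it last. A minimal sketch of the list mechanics (simplified stand-ins for the kernel's list.h helpers, not the real API):

```c
/* Minimal circular doubly-linked list, mirroring the kernel's list.h
 * just enough to show what list_move_tail() does to LRU order.
 * The head node is a sentinel: head->next is the oldest entry. */
struct list_node { struct list_node *prev, *next; };

static void list_init(struct list_node *h) { h->prev = h->next = h; }

static void list_del_node(struct list_node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

static void list_add_tail_node(struct list_node *n, struct list_node *h)
{
    n->prev = h->prev;
    n->next = h;
    h->prev->next = n;
    h->prev = n;
}

/* On CPU access, a still-inactive object is moved to the tail so the
 * eviction scan (which walks from the head) considers it last. */
static void touch(struct list_node *obj, struct list_node *lru)
{
    list_del_node(obj);
    list_add_tail_node(obj, lru);
}
```

This is also why the v2 note matters: pinned objects never sit on the inactive list at all, so the i915_gem_object_is_inactive() check must reject them before attempting the move.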

Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Move-the-eviction-logic-to-its-own-file.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Move-the-eviction-logic-to-its-own-file.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,576 @@
+From: Chris Wilson <chris at chris-wilson.co.uk>
+Date: Fri, 17 Jun 2011 10:04:21 -0500
+Subject: [PATCH 04/10] drm/i915: Move the eviction logic to its own file.
+
+commit cf9ec16fcec6fcb0a0ae6d5bcd3f34ff348c683e upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+The eviction code is the gnarly underbelly of memory management, and is
+clearer if kept separated from the normal domain management in GEM.
+
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+Signed-off-by: Eric Anholt <eric at anholt.net>
+(backported from commit b47eb4a2b302f33adaed2a27d2b3bfc74fe35ac5 upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/i915/Makefile         |    1 +
+ drivers/gpu/drm/i915/i915_drv.h       |   11 ++
+ drivers/gpu/drm/i915/i915_gem.c       |  206 +----------------------------
+ drivers/gpu/drm/i915/i915_gem_evict.c |  235 +++++++++++++++++++++++++++++++++
+ 4 files changed, 249 insertions(+), 204 deletions(-)
+ create mode 100644 drivers/gpu/drm/i915/i915_gem_evict.c
+
+diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
+index 9929f84..8a83bb7 100644
+--- a/drivers/gpu/drm/i915/Makefile
++++ b/drivers/gpu/drm/i915/Makefile
+@@ -8,6 +8,7 @@ i915-y := i915_drv.o i915_dma.o i915_irq.o i915_mem.o \
+           i915_suspend.o \
+ 	  i915_gem.o \
+ 	  i915_gem_debug.o \
++	  i915_gem_evict.o \
+ 	  i915_gem_tiling.o \
+ 	  i915_trace_points.o \
+ 	  intel_display.o \
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index ecc4fbe..f7e12ba 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -55,6 +55,8 @@ enum plane {
+ 
+ #define I915_NUM_PIPE	2
+ 
++#define I915_GEM_GPU_DOMAINS	(~(I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT))
++
+ /* Interface history:
+  *
+  * 1.1: Original.
+@@ -858,6 +860,9 @@ int i915_gem_init_object(struct drm_gem_object *obj);
+ void i915_gem_free_object(struct drm_gem_object *obj);
+ int i915_gem_object_pin(struct drm_gem_object *obj, uint32_t alignment);
+ void i915_gem_object_unpin(struct drm_gem_object *obj);
++void i915_gem_flush(struct drm_device *dev,
++		    uint32_t invalidate_domains,
++		    uint32_t flush_domains);
+ int i915_gem_object_unbind(struct drm_gem_object *obj);
+ void i915_gem_release_mmap(struct drm_gem_object *obj);
+ void i915_gem_lastclose(struct drm_device *dev);
+@@ -875,6 +880,7 @@ int i915_gem_init_ringbuffer(struct drm_device *dev);
+ void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
+ int i915_gem_do_init(struct drm_device *dev, unsigned long start,
+ 		     unsigned long end);
++int i915_gpu_idle(struct drm_device *dev);
+ int i915_gem_idle(struct drm_device *dev);
+ uint32_t i915_add_request(struct drm_device *dev, struct drm_file *file_priv,
+ 			  uint32_t flush_domains);
+@@ -896,6 +902,11 @@ void i915_gem_object_flush_write_domain(struct drm_gem_object *obj);
+ void i915_gem_shrinker_init(void);
+ void i915_gem_shrinker_exit(void);
+ 
++/* i915_gem_evict.c */
++int i915_gem_evict_something(struct drm_device *dev, int min_size, unsigned alignment);
++int i915_gem_evict_everything(struct drm_device *dev);
++int i915_gem_evict_inactive(struct drm_device *dev);
++
+ /* i915_gem_tiling.c */
+ void i915_gem_detect_bit_6_swizzle(struct drm_device *dev);
+ void i915_gem_object_do_bit_17_swizzle(struct drm_gem_object *obj);
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index e0afa05..2e4ff69 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -34,8 +34,6 @@
+ #include <linux/swap.h>
+ #include <linux/pci.h>
+ 
+-#define I915_GEM_GPU_DOMAINS	(~(I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT))
+-
+ static uint32_t i915_gem_get_gtt_alignment(struct drm_gem_object *obj);
+ static void i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj);
+ static void i915_gem_object_flush_gtt_write_domain(struct drm_gem_object *obj);
+@@ -50,9 +48,6 @@ static int i915_gem_object_wait_rendering(struct drm_gem_object *obj);
+ static int i915_gem_object_bind_to_gtt(struct drm_gem_object *obj,
+ 					   unsigned alignment);
+ static void i915_gem_clear_fence_reg(struct drm_gem_object *obj);
+-static int i915_gem_evict_something(struct drm_device *dev, int min_size,
+-				    unsigned alignment);
+-static int i915_gem_evict_from_inactive_list(struct drm_device *dev);
+ static int i915_gem_phys_pwrite(struct drm_device *dev, struct drm_gem_object *obj,
+ 				struct drm_i915_gem_pwrite *args,
+ 				struct drm_file *file_priv);
+@@ -1927,7 +1922,7 @@ i915_wait_request(struct drm_device *dev, uint32_t seqno)
+ 	return i915_do_wait_request(dev, seqno, 1);
+ }
+ 
+-static void
++void
+ i915_gem_flush(struct drm_device *dev,
+ 	       uint32_t invalidate_domains,
+ 	       uint32_t flush_domains)
+@@ -2105,179 +2100,6 @@ i915_gem_object_unbind(struct drm_gem_object *obj)
+ 	return 0;
+ }
+ 
+-static int
+-i915_gem_scan_inactive_list_and_evict(struct drm_device *dev, int min_size,
+-				      unsigned alignment, int *found)
+-{
+-	drm_i915_private_t *dev_priv = dev->dev_private;
+-	struct drm_gem_object *obj;
+-	struct drm_i915_gem_object *obj_priv;
+-	struct drm_gem_object *best = NULL;
+-	struct drm_gem_object *first = NULL;
+-
+-	/* Try to find the smallest clean object */
+-	list_for_each_entry(obj_priv, &dev_priv->mm.inactive_list, list) {
+-		struct drm_gem_object *obj = obj_priv->obj;
+-		if (obj->size >= min_size) {
+-			if ((!obj_priv->dirty ||
+-			     i915_gem_object_is_purgeable(obj_priv)) &&
+-			    (!best || obj->size < best->size)) {
+-				best = obj;
+-				if (best->size == min_size)
+-					break;
+-			}
+-			if (!first)
+-			    first = obj;
+-		}
+-	}
+-
+-	obj = best ? best : first;
+-
+-	if (!obj) {
+-		*found = 0;
+-		return 0;
+-	}
+-
+-	*found = 1;
+-
+-#if WATCH_LRU
+-	DRM_INFO("%s: evicting %p\n", __func__, obj);
+-#endif
+-	obj_priv = obj->driver_private;
+-	BUG_ON(obj_priv->pin_count != 0);
+-	BUG_ON(obj_priv->active);
+-
+-	/* Wait on the rendering and unbind the buffer. */
+-	return i915_gem_object_unbind(obj);
+-}
+-
+-static int
+-i915_gem_evict_everything(struct drm_device *dev)
+-{
+-	drm_i915_private_t *dev_priv = dev->dev_private;
+-	int ret;
+-	uint32_t seqno;
+-	bool lists_empty;
+-
+-	spin_lock(&dev_priv->mm.active_list_lock);
+-	lists_empty = (list_empty(&dev_priv->mm.inactive_list) &&
+-		       list_empty(&dev_priv->mm.flushing_list) &&
+-		       list_empty(&dev_priv->mm.active_list));
+-	spin_unlock(&dev_priv->mm.active_list_lock);
+-
+-	if (lists_empty)
+-		return -ENOSPC;
+-
+-	/* Flush everything (on to the inactive lists) and evict */
+-	i915_gem_flush(dev, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+-	seqno = i915_add_request(dev, NULL, I915_GEM_GPU_DOMAINS);
+-	if (seqno == 0)
+-		return -ENOMEM;
+-
+-	ret = i915_wait_request(dev, seqno);
+-	if (ret)
+-		return ret;
+-
+-	BUG_ON(!list_empty(&dev_priv->mm.flushing_list));
+-
+-	ret = i915_gem_evict_from_inactive_list(dev);
+-	if (ret)
+-		return ret;
+-
+-	spin_lock(&dev_priv->mm.active_list_lock);
+-	lists_empty = (list_empty(&dev_priv->mm.inactive_list) &&
+-		       list_empty(&dev_priv->mm.flushing_list) &&
+-		       list_empty(&dev_priv->mm.active_list));
+-	spin_unlock(&dev_priv->mm.active_list_lock);
+-	BUG_ON(!lists_empty);
+-
+-	return 0;
+-}
+-
+-static int
+-i915_gem_evict_something(struct drm_device *dev,
+-			 int min_size, unsigned alignment)
+-{
+-	drm_i915_private_t *dev_priv = dev->dev_private;
+-	int ret, found;
+-
+-	for (;;) {
+-		i915_gem_retire_requests(dev);
+-
+-		/* If there's an inactive buffer available now, grab it
+-		 * and be done.
+-		 */
+-		ret = i915_gem_scan_inactive_list_and_evict(dev, min_size,
+-							    alignment,
+-							    &found);
+-		if (found)
+-			return ret;
+-
+-		/* If we didn't get anything, but the ring is still processing
+-		 * things, wait for the next to finish and hopefully leave us
+-		 * a buffer to evict.
+-		 */
+-		if (!list_empty(&dev_priv->mm.request_list)) {
+-			struct drm_i915_gem_request *request;
+-
+-			request = list_first_entry(&dev_priv->mm.request_list,
+-						   struct drm_i915_gem_request,
+-						   list);
+-
+-			ret = i915_wait_request(dev, request->seqno);
+-			if (ret)
+-				return ret;
+-
+-			continue;
+-		}
+-
+-		/* If we didn't have anything on the request list but there
+-		 * are buffers awaiting a flush, emit one and try again.
+-		 * When we wait on it, those buffers waiting for that flush
+-		 * will get moved to inactive.
+-		 */
+-		if (!list_empty(&dev_priv->mm.flushing_list)) {
+-			struct drm_gem_object *obj = NULL;
+-			struct drm_i915_gem_object *obj_priv;
+-
+-			/* Find an object that we can immediately reuse */
+-			list_for_each_entry(obj_priv, &dev_priv->mm.flushing_list, list) {
+-				obj = obj_priv->obj;
+-				if (obj->size >= min_size)
+-					break;
+-
+-				obj = NULL;
+-			}
+-
+-			if (obj != NULL) {
+-				uint32_t seqno;
+-
+-				i915_gem_flush(dev,
+-					       obj->write_domain,
+-					       obj->write_domain);
+-				seqno = i915_add_request(dev, NULL, obj->write_domain);
+-				if (seqno == 0)
+-					return -ENOMEM;
+-
+-				ret = i915_wait_request(dev, seqno);
+-				if (ret)
+-					return ret;
+-
+-				continue;
+-			}
+-		}
+-
+-		/* If we didn't do any of the above, there's no single buffer
+-		 * large enough to swap out for the new one, so just evict
+-		 * everything and start again. (This should be rare.)
+-		 */
+-		if (!list_empty (&dev_priv->mm.inactive_list))
+-			return i915_gem_evict_from_inactive_list(dev);
+-		else
+-			return i915_gem_evict_everything(dev);
+-	}
+-}
+-
+ int
+ i915_gem_object_get_pages(struct drm_gem_object *obj,
+ 			  gfp_t gfpmask)
+@@ -4510,30 +4332,6 @@ void i915_gem_free_object(struct drm_gem_object *obj)
+ 	kfree(obj->driver_private);
+ }
+ 
+-/** Unbinds all inactive objects. */
+-static int
+-i915_gem_evict_from_inactive_list(struct drm_device *dev)
+-{
+-	drm_i915_private_t *dev_priv = dev->dev_private;
+-
+-	while (!list_empty(&dev_priv->mm.inactive_list)) {
+-		struct drm_gem_object *obj;
+-		int ret;
+-
+-		obj = list_first_entry(&dev_priv->mm.inactive_list,
+-				       struct drm_i915_gem_object,
+-				       list)->obj;
+-
+-		ret = i915_gem_object_unbind(obj);
+-		if (ret != 0) {
+-			DRM_ERROR("Error unbinding object: %d\n", ret);
+-			return ret;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ int
+ i915_gem_idle(struct drm_device *dev)
+ {
+@@ -4647,7 +4445,7 @@ i915_gem_idle(struct drm_device *dev)
+ 
+ 
+ 	/* Move all inactive buffers out of the GTT. */
+-	ret = i915_gem_evict_from_inactive_list(dev);
++	ret = i915_gem_evict_inactive(dev);
+ 	WARN_ON(!list_empty(&dev_priv->mm.inactive_list));
+ 	if (ret) {
+ 		mutex_unlock(&dev->struct_mutex);
+diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
+new file mode 100644
+index 0000000..127a28a
+--- /dev/null
++++ b/drivers/gpu/drm/i915/i915_gem_evict.c
+@@ -0,0 +1,235 @@
++/*
++ * Copyright © 2008-2010 Intel Corporation
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ *
++ * Authors:
++ *    Eric Anholt <eric at anholt.net>
++ *    Chris Wilson <chris at chris-wilson.co.uuk>
++ *
++ */
++
++#include "drmP.h"
++#include "drm.h"
++#include "i915_drv.h"
++#include "i915_drm.h"
++
++static inline int
++i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj_priv)
++{
++	return obj_priv->madv == I915_MADV_DONTNEED;
++}
++
++static int
++i915_gem_scan_inactive_list_and_evict(struct drm_device *dev, int min_size,
++				      unsigned alignment, int *found)
++{
++	drm_i915_private_t *dev_priv = dev->dev_private;
++	struct drm_gem_object *obj;
++	struct drm_i915_gem_object *obj_priv;
++	struct drm_gem_object *best = NULL;
++	struct drm_gem_object *first = NULL;
++
++	/* Try to find the smallest clean object */
++	list_for_each_entry(obj_priv, &dev_priv->mm.inactive_list, list) {
++		struct drm_gem_object *obj = obj_priv->obj;
++		if (obj->size >= min_size) {
++			if ((!obj_priv->dirty ||
++			     i915_gem_object_is_purgeable(obj_priv)) &&
++			    (!best || obj->size < best->size)) {
++				best = obj;
++				if (best->size == min_size)
++					break;
++			}
++			if (!first)
++			    first = obj;
++		}
++	}
++
++	obj = best ? best : first;
++
++	if (!obj) {
++		*found = 0;
++		return 0;
++	}
++
++	*found = 1;
++
++#if WATCH_LRU
++	DRM_INFO("%s: evicting %p\n", __func__, obj);
++#endif
++	obj_priv = obj->driver_private;
++	BUG_ON(obj_priv->pin_count != 0);
++	BUG_ON(obj_priv->active);
++
++	/* Wait on the rendering and unbind the buffer. */
++	return i915_gem_object_unbind(obj);
++}
++
++int
++i915_gem_evict_something(struct drm_device *dev,
++			 int min_size, unsigned alignment)
++{
++	drm_i915_private_t *dev_priv = dev->dev_private;
++	int ret, found;
++
++	for (;;) {
++		i915_gem_retire_requests(dev);
++
++		/* If there's an inactive buffer available now, grab it
++		 * and be done.
++		 */
++		ret = i915_gem_scan_inactive_list_and_evict(dev, min_size,
++							    alignment,
++							    &found);
++		if (found)
++			return ret;
++
++		/* If we didn't get anything, but the ring is still processing
++		 * things, wait for the next to finish and hopefully leave us
++		 * a buffer to evict.
++		 */
++		if (!list_empty(&dev_priv->mm.request_list)) {
++			struct drm_i915_gem_request *request;
++
++			request = list_first_entry(&dev_priv->mm.request_list,
++						   struct drm_i915_gem_request,
++						   list);
++
++			ret = i915_do_wait_request(dev, request->seqno, true);
++			if (ret)
++				return ret;
++
++			continue;
++		}
++
++		/* If we didn't have anything on the request list but there
++		 * are buffers awaiting a flush, emit one and try again.
++		 * When we wait on it, those buffers waiting for that flush
++		 * will get moved to inactive.
++		 */
++		if (!list_empty(&dev_priv->mm.flushing_list)) {
++			struct drm_gem_object *obj = NULL;
++			struct drm_i915_gem_object *obj_priv;
++
++			/* Find an object that we can immediately reuse */
++			list_for_each_entry(obj_priv, &dev_priv->mm.flushing_list, list) {
++				obj = obj_priv->obj;
++				if (obj->size >= min_size)
++					break;
++
++				obj = NULL;
++			}
++
++			if (obj != NULL) {
++				uint32_t seqno;
++
++				i915_gem_flush(dev,
++					       obj->write_domain,
++					       obj->write_domain);
++				seqno = i915_add_request(dev, NULL, obj->write_domain);
++				if (seqno == 0)
++					return -ENOMEM;
++
++				ret = i915_do_wait_request(dev, seqno, true);
++				if (ret)
++					return ret;
++
++				continue;
++			}
++		}
++
++		/* If we didn't do any of the above, there's no single buffer
++		 * large enough to swap out for the new one, so just evict
++		 * everything and start again. (This should be rare.)
++		 */
++		if (!list_empty (&dev_priv->mm.inactive_list))
++			return i915_gem_evict_inactive(dev);
++		else
++			return i915_gem_evict_everything(dev);
++	}
++}
++
++int
++i915_gem_evict_everything(struct drm_device *dev)
++{
++	drm_i915_private_t *dev_priv = dev->dev_private;
++	int ret;
++	uint32_t seqno;
++	bool lists_empty;
++
++	spin_lock(&dev_priv->mm.active_list_lock);
++	lists_empty = (list_empty(&dev_priv->mm.inactive_list) &&
++		       list_empty(&dev_priv->mm.flushing_list) &&
++		       list_empty(&dev_priv->mm.active_list));
++	spin_unlock(&dev_priv->mm.active_list_lock);
++
++	if (lists_empty)
++		return -ENOSPC;
++
++	/* Flush everything (on to the inactive lists) and evict */
++	i915_gem_flush(dev, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
++	seqno = i915_add_request(dev, NULL, I915_GEM_GPU_DOMAINS);
++	if (seqno == 0)
++		return -ENOMEM;
++
++	ret = i915_do_wait_request(dev, seqno, true);
++	if (ret)
++		return ret;
++
++	BUG_ON(!list_empty(&dev_priv->mm.flushing_list));
++
++	ret = i915_gem_evict_inactive(dev);
++	if (ret)
++		return ret;
++
++	spin_lock(&dev_priv->mm.active_list_lock);
++	lists_empty = (list_empty(&dev_priv->mm.inactive_list) &&
++		       list_empty(&dev_priv->mm.flushing_list) &&
++		       list_empty(&dev_priv->mm.active_list));
++	spin_unlock(&dev_priv->mm.active_list_lock);
++	BUG_ON(!lists_empty);
++
++	return 0;
++}
++
++/** Unbinds all inactive objects. */
++int
++i915_gem_evict_inactive(struct drm_device *dev)
++{
++	drm_i915_private_t *dev_priv = dev->dev_private;
++
++	while (!list_empty(&dev_priv->mm.inactive_list)) {
++		struct drm_gem_object *obj;
++		int ret;
++
++		obj = list_first_entry(&dev_priv->mm.inactive_list,
++				       struct drm_i915_gem_object,
++				       list)->obj;
++
++		ret = i915_gem_object_unbind(obj);
++		if (ret != 0) {
++			DRM_ERROR("Error unbinding object: %d\n", ret);
++			return ret;
++		}
++	}
++
++	return 0;
++}
+-- 
+1.7.2.5
+

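[Editor's note: the inactive-list scan in the patch above prefers the smallest clean (or purgeable) object that satisfies min_size, falling back to the first object that is merely large enough. A minimal stand-alone sketch of that selection policy over a plain array; names and the -1 sentinel are illustrative, not the driver's:]

```c
#include <assert.h>
#include <stddef.h>

struct obj { size_t size; int dirty; int purgeable; };

/* Pick the smallest clean-or-purgeable object of at least min_size;
 * if none qualifies, fall back to the first object large enough.
 * Returns the index of the victim, or -1 if nothing fits. */
static int pick_victim(const struct obj *objs, int n, size_t min_size)
{
	int best = -1, first = -1;

	for (int i = 0; i < n; i++) {
		if (objs[i].size < min_size)
			continue;
		if ((!objs[i].dirty || objs[i].purgeable) &&
		    (best < 0 || objs[i].size < objs[best].size)) {
			best = i;
			if (objs[i].size == min_size)
				break;	/* exact fit, stop early */
		}
		if (first < 0)
			first = i;
	}
	return best >= 0 ? best : first;
}
```

An exact-size clean fit terminates the scan early, mirroring the `break` in `i915_gem_scan_inactive_list_and_evict`.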
Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Periodically-flush-the-active-lists-and-req.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-Periodically-flush-the-active-lists-and-req.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,48 @@
+From: Chris Wilson <chris at chris-wilson.co.uk>
+Date: Fri, 17 Jun 2011 10:04:22 -0500
+Subject: [PATCH 08/10] drm/i915: Periodically flush the active lists and requests
+
+commit 41516474bc14ea128b05bf65c9cbdb04739582ac upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+In order to retire active buffers whilst no client is active, we need to
+insert our own flush requests onto the ring.
+
+This is useful for servers that queue up some rendering and then go to
+sleep as it allows us to complete the processing of those requests,
+potentially making that memory available again much earlier.
+
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+(backported from commit 0a58705b2fc3fa29525cf2fdae3d4276a5771280 upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/i915/i915_gem.c |    7 +++++++
+ 1 files changed, 7 insertions(+), 0 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index b3c7bd1..0314f7f 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -1862,9 +1862,16 @@ i915_gem_retire_work_handler(struct work_struct *work)
+ 
+ 	mutex_lock(&dev->struct_mutex);
+ 	i915_gem_retire_requests(dev);
++
++	if (!list_empty(&dev_priv->mm.gpu_write_list)) {
++		i915_gem_flush(dev, 0, I915_GEM_GPU_DOMAINS);
++		i915_add_request(dev, NULL, I915_GEM_GPU_DOMAINS);
++	}
++
+ 	if (!dev_priv->mm.suspended &&
+ 	    !list_empty(&dev_priv->mm.request_list))
+ 		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, HZ);
++
+ 	mutex_unlock(&dev->struct_mutex);
+ }
+ 
+-- 
+1.7.2.5
+

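[Editor's note: the periodic-flush patch above makes the retire work handler emit its own flush request when buffers with pending GPU writes exist but no client is awake to submit one. A toy model of that decision, under the assumption that a request is what lets pending writes retire:]

```c
#include <assert.h>

/* Toy model: buffers with pending GPU writes only retire once a
 * flush request lands on the ring.  The retire tick injects that
 * request itself when no client will. */
struct ring { int pending_writes; int requests; };

static void retire_tick(struct ring *r)
{
	if (r->pending_writes) {	/* mm.gpu_write_list non-empty */
		r->requests++;		/* i915_gem_flush + i915_add_request */
		r->pending_writes = 0;
	}
}
```

When there is nothing to flush, the tick is a no-op, matching the guard on `gpu_write_list` in the patch.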
Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-evict-Ensure-we-completely-cleanup-on-failu.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-evict-Ensure-we-completely-cleanup-on-failu.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,86 @@
+From: Chris Wilson <chris at chris-wilson.co.uk>
+Date: Fri, 17 Jun 2011 10:04:22 -0500
+Subject: [PATCH 07/10] drm/i915/evict: Ensure we completely cleanup on failure
+
+commit 2f4f8bc3da84232a25e0ced165d4bb5643d3aaad upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+... and not leave the objects in an inconsistent state.
+
+[seth.forshee at canonical.com: Also backported similar cleanups in success
+ path from commit e39a01501b228e1be2037d5bddccae2a820af902]
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+Cc: stable at kernel.org
+(backported from commit 092de6f225638ec300936bfcbdc67805733cc78c upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/i915/i915_gem_evict.c |   32 ++++++++++++++++++++------------
+ 1 files changed, 20 insertions(+), 12 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
+index 84ed1a7..9c1ec78 100644
+--- a/drivers/gpu/drm/i915/i915_gem_evict.c
++++ b/drivers/gpu/drm/i915/i915_gem_evict.c
+@@ -134,9 +134,15 @@ i915_gem_evict_something(struct drm_device *dev, int min_size, unsigned alignmen
+ 	}
+ 
+ 	/* Nothing found, clean up and bail out! */
+-	list_for_each_entry(obj_priv, &unwind_list, evict_list) {
++	while (!list_empty(&unwind_list)) {
++		obj_priv = list_first_entry(&unwind_list,
++					    struct drm_i915_gem_object,
++					    evict_list);
++
+ 		ret = drm_mm_scan_remove_block(obj_priv->gtt_space);
+ 		BUG_ON(ret);
++
++		list_del_init(&obj_priv->evict_list);
+ 	}
+ 
+ 	/* We expect the caller to unpin, evict all and try again, or give up.
+@@ -145,26 +151,28 @@ i915_gem_evict_something(struct drm_device *dev, int min_size, unsigned alignmen
+ 	return -ENOSPC;
+ 
+ found:
++	/* drm_mm doesn't allow any other operations while
++	 * scanning, therefore store to be evicted objects on a
++	 * temporary list. */
+ 	INIT_LIST_HEAD(&eviction_list);
+ 	list_for_each_entry_safe(obj_priv, tmp_obj_priv,
+ 				 &unwind_list, evict_list) {
+ 		if (drm_mm_scan_remove_block(obj_priv->gtt_space)) {
+-			/* drm_mm doesn't allow any other other operations while
+-			 * scanning, therefore store to be evicted objects on a
+-			 * temporary list. */
+ 			list_move(&obj_priv->evict_list, &eviction_list);
++			continue;
+ 		}
++		list_del_init(&obj_priv->evict_list);
+ 	}
+ 
+ 	/* Unbinding will emit any required flushes */
+-	list_for_each_entry_safe(obj_priv, tmp_obj_priv,
+-				 &eviction_list, evict_list) {
+-#if WATCH_LRU
+-		DRM_INFO("%s: evicting %p\n", __func__, obj);
+-#endif
+-		ret = i915_gem_object_unbind(obj_priv->obj);
+-		if (ret)
+-			return ret;
++	while (!list_empty(&eviction_list)) {
++		obj_priv = list_first_entry(&eviction_list,
++					    struct drm_i915_gem_object,
++					    evict_list);
++		if (ret == 0)
++			ret = i915_gem_object_unbind(obj_priv->obj);
++
++		list_del_init(&obj_priv->evict_list);
+ 	}
+ 
+ 	/* The just created free hole should be on the top of the free stack
+-- 
+1.7.2.5
+

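[Editor's note: the cleanup fix above replaces iteration with a drain loop: always unlink the head node, attempt the unbind only while no error has occurred, and return the first error, so the list is guaranteed empty on exit. A sketch of that pattern on a plain singly linked list; the -5 error code stands in for a kernel errno and is illustrative:]

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; int fails; };

/* Drain pattern: every node is unlinked even after a failure;
 * the first error code is preserved and returned. */
static int drain(struct node **head)
{
	int ret = 0;

	while (*head) {
		struct node *n = *head;
		*head = n->next;	/* list_del_init() equivalent */
		n->next = NULL;
		if (ret == 0 && n->fails)
			ret = -5;	/* keep only the first error */
	}
	return ret;
}
```

Contrast with the buggy version: returning from inside a `list_for_each_entry` loop leaves the remaining nodes linked, which is exactly the inconsistent state the patch removes.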
Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-prepare-for-fair-lru-eviction.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-i915-prepare-for-fair-lru-eviction.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,192 @@
+From: Daniel Vetter <daniel.vetter at ffwll.ch>
+Date: Fri, 17 Jun 2011 10:04:20 -0500
+Subject: [PATCH 03/10] drm/i915: prepare for fair lru eviction
+
+commit f07147fcefea6d203882c570d61bdf73dd25ae66 upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+This does two little changes:
+
+- Add an alignment parameter for evict_something. It's not really great to
+  whack a carefully sized hole into the gtt with the wrong alignment.
+  Especially since the fallback path is a full evict.
+
+- With the inactive scan stuff we need to evict more than one object, so
+  move the unbind call into the helper function that scans for the object
+  to be evicted, too.  And adjust its name.
+
+No functional changes in this patch, just preparation.
+
+Signed-Off-by: Daniel Vetter <daniel.vetter at ffwll.ch>
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+Signed-off-by: Eric Anholt <eric at anholt.net>
+(backported from commit 0108a3edd5c2e3b150a550d565b6aa1a67c0edbe upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/i915/i915_gem.c |   67 ++++++++++++++++++++++++---------------
+ 1 files changed, 41 insertions(+), 26 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index a34fd44..e0afa05 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -36,6 +36,7 @@
+ 
+ #define I915_GEM_GPU_DOMAINS	(~(I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT))
+ 
++static uint32_t i915_gem_get_gtt_alignment(struct drm_gem_object *obj);
+ static void i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj);
+ static void i915_gem_object_flush_gtt_write_domain(struct drm_gem_object *obj);
+ static void i915_gem_object_flush_cpu_write_domain(struct drm_gem_object *obj);
+@@ -49,7 +50,8 @@ static int i915_gem_object_wait_rendering(struct drm_gem_object *obj);
+ static int i915_gem_object_bind_to_gtt(struct drm_gem_object *obj,
+ 					   unsigned alignment);
+ static void i915_gem_clear_fence_reg(struct drm_gem_object *obj);
+-static int i915_gem_evict_something(struct drm_device *dev, int min_size);
++static int i915_gem_evict_something(struct drm_device *dev, int min_size,
++				    unsigned alignment);
+ static int i915_gem_evict_from_inactive_list(struct drm_device *dev);
+ static int i915_gem_phys_pwrite(struct drm_device *dev, struct drm_gem_object *obj,
+ 				struct drm_i915_gem_pwrite *args,
+@@ -334,7 +336,8 @@ i915_gem_object_get_pages_or_evict(struct drm_gem_object *obj)
+ 	if (ret == -ENOMEM) {
+ 		struct drm_device *dev = obj->dev;
+ 
+-		ret = i915_gem_evict_something(dev, obj->size);
++		ret = i915_gem_evict_something(dev, obj->size,
++					       i915_gem_get_gtt_alignment(obj));
+ 		if (ret)
+ 			return ret;
+ 
+@@ -2102,10 +2105,12 @@ i915_gem_object_unbind(struct drm_gem_object *obj)
+ 	return 0;
+ }
+ 
+-static struct drm_gem_object *
+-i915_gem_find_inactive_object(struct drm_device *dev, int min_size)
++static int
++i915_gem_scan_inactive_list_and_evict(struct drm_device *dev, int min_size,
++				      unsigned alignment, int *found)
+ {
+ 	drm_i915_private_t *dev_priv = dev->dev_private;
++	struct drm_gem_object *obj;
+ 	struct drm_i915_gem_object *obj_priv;
+ 	struct drm_gem_object *best = NULL;
+ 	struct drm_gem_object *first = NULL;
+@@ -2119,14 +2124,31 @@ i915_gem_find_inactive_object(struct drm_device *dev, int min_size)
+ 			    (!best || obj->size < best->size)) {
+ 				best = obj;
+ 				if (best->size == min_size)
+-					return best;
++					break;
+ 			}
+ 			if (!first)
+ 			    first = obj;
+ 		}
+ 	}
+ 
+-	return best ? best : first;
++	obj = best ? best : first;
++
++	if (!obj) {
++		*found = 0;
++		return 0;
++	}
++
++	*found = 1;
++
++#if WATCH_LRU
++	DRM_INFO("%s: evicting %p\n", __func__, obj);
++#endif
++	obj_priv = obj->driver_private;
++	BUG_ON(obj_priv->pin_count != 0);
++	BUG_ON(obj_priv->active);
++
++	/* Wait on the rendering and unbind the buffer. */
++	return i915_gem_object_unbind(obj);
+ }
+ 
+ static int
+@@ -2173,11 +2195,11 @@ i915_gem_evict_everything(struct drm_device *dev)
+ }
+ 
+ static int
+-i915_gem_evict_something(struct drm_device *dev, int min_size)
++i915_gem_evict_something(struct drm_device *dev,
++			 int min_size, unsigned alignment)
+ {
+ 	drm_i915_private_t *dev_priv = dev->dev_private;
+-	struct drm_gem_object *obj;
+-	int ret;
++	int ret, found;
+ 
+ 	for (;;) {
+ 		i915_gem_retire_requests(dev);
+@@ -2185,20 +2207,11 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
+ 		/* If there's an inactive buffer available now, grab it
+ 		 * and be done.
+ 		 */
+-		obj = i915_gem_find_inactive_object(dev, min_size);
+-		if (obj) {
+-			struct drm_i915_gem_object *obj_priv;
+-
+-#if WATCH_LRU
+-			DRM_INFO("%s: evicting %p\n", __func__, obj);
+-#endif
+-			obj_priv = obj->driver_private;
+-			BUG_ON(obj_priv->pin_count != 0);
+-			BUG_ON(obj_priv->active);
+-
+-			/* Wait on the rendering and unbind the buffer. */
+-			return i915_gem_object_unbind(obj);
+-		}
++		ret = i915_gem_scan_inactive_list_and_evict(dev, min_size,
++							    alignment,
++							    &found);
++		if (found)
++			return ret;
+ 
+ 		/* If we didn't get anything, but the ring is still processing
+ 		 * things, wait for the next to finish and hopefully leave us
+@@ -2224,6 +2237,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
+ 		 * will get moved to inactive.
+ 		 */
+ 		if (!list_empty(&dev_priv->mm.flushing_list)) {
++			struct drm_gem_object *obj = NULL;
+ 			struct drm_i915_gem_object *obj_priv;
+ 
+ 			/* Find an object that we can immediately reuse */
+@@ -2672,7 +2686,7 @@ i915_gem_object_bind_to_gtt(struct drm_gem_object *obj, unsigned alignment)
+ #if WATCH_LRU
+ 		DRM_INFO("%s: GTT full, evicting something\n", __func__);
+ #endif
+-		ret = i915_gem_evict_something(dev, obj->size);
++		ret = i915_gem_evict_something(dev, obj->size, alignment);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -2690,7 +2704,8 @@ i915_gem_object_bind_to_gtt(struct drm_gem_object *obj, unsigned alignment)
+ 
+ 		if (ret == -ENOMEM) {
+ 			/* first try to clear up some space from the GTT */
+-			ret = i915_gem_evict_something(dev, obj->size);
++			ret = i915_gem_evict_something(dev, obj->size,
++						       alignment);
+ 			if (ret) {
+ 				/* now try to shrink everyone else */
+ 				if (gfpmask) {
+@@ -2720,7 +2735,7 @@ i915_gem_object_bind_to_gtt(struct drm_gem_object *obj, unsigned alignment)
+ 		drm_mm_put_block(obj_priv->gtt_space);
+ 		obj_priv->gtt_space = NULL;
+ 
+-		ret = i915_gem_evict_something(dev, obj->size);
++		ret = i915_gem_evict_something(dev, obj->size, alignment);
+ 		if (ret)
+ 			return ret;
+ 
+-- 
+1.7.2.5
+

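[Editor's note: the patch above threads an alignment parameter into `i915_gem_evict_something` because a hole of the right size at the wrong alignment is useless. A sketch of why: a block must cover the requested size plus whatever is wasted aligning its start, mirroring the `check_free_mm_node()` logic introduced elsewhere in this series:]

```c
#include <assert.h>

/* Does a free block at [start, start+size) fit a request of
 * req_size bytes aligned to 'alignment'?  Alignment waste at the
 * front of the block counts against its usable size. */
static int hole_fits(unsigned long start, unsigned long size,
		     unsigned long req_size, unsigned long alignment)
{
	unsigned long wasted = 0;

	if (alignment) {
		unsigned long rem = start % alignment;
		if (rem)
			wasted = alignment - rem;	/* pad to boundary */
	}
	return size >= req_size + wasted;
}
```

This is why the fallback path of an unaligned carve-out used to be a full evict: the hole was sized correctly but unusable.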
Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-implement-helper-functions-for-scanning-lru-list.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-implement-helper-functions-for-scanning-lru-list.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,309 @@
+From: Daniel Vetter <daniel.vetter at ffwll.ch>
+Date: Fri, 17 Jun 2011 10:04:20 -0500
+Subject: [PATCH 02/10] drm: implement helper functions for scanning lru list
+
+commit be54bbcaee8559cc412b5e4abc8eb33388b083e0 upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+These helper functions can be used to efficiently scan lru list
+for eviction. Eviction becomes a three stage process:
+1. Scanning through the lru list until a suitable hole has been found.
+2. Scan backwards to restore drm_mm consistency and find out which
+   objects fall into the hole.
+3. Evict the objects that fall into the hole.
+
+These helper functions don't allocate any memory (at the price of
+not allowing any other concurrent operations). Hence this can also be
+used for ttm (which does lru scanning under a spinlock).
+
+Evicting objects in this fashion should be more fair than the current
+approach by i915 (scan the lru for a object large enough to contain
+the new object). It's also more efficient than the current approach used
+by ttm (unconditionally evict objects from the lru until there's enough
+free space).
+
+Signed-Off-by: Daniel Vetter <daniel.vetter at ffwll.ch>
+Acked-by: Thomas Hellstrom <thellstrom at vmwgfx.com>
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+Signed-off-by: Dave Airlie <airlied at redhat.com>
+(backported from commit 709ea97145c125b3811ff70429e90ebdb0e832e5 upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/drm_mm.c |  167 ++++++++++++++++++++++++++++++++++++++++++++-
+ include/drm/drm_mm.h     |   15 ++++-
+ 2 files changed, 177 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
+index 4935e91..f1d3314 100644
+--- a/drivers/gpu/drm/drm_mm.c
++++ b/drivers/gpu/drm/drm_mm.c
+@@ -83,9 +83,9 @@ static struct drm_mm_node *drm_mm_kmalloc(struct drm_mm *mm, int atomic)
+ 	struct drm_mm_node *child;
+ 
+ 	if (atomic)
+-		child = kmalloc(sizeof(*child), GFP_ATOMIC);
++		child = kzalloc(sizeof(*child), GFP_ATOMIC);
+ 	else
+-		child = kmalloc(sizeof(*child), GFP_KERNEL);
++		child = kzalloc(sizeof(*child), GFP_KERNEL);
+ 
+ 	if (unlikely(child == NULL)) {
+ 		spin_lock(&mm->unused_lock);
+@@ -115,7 +115,7 @@ int drm_mm_pre_get(struct drm_mm *mm)
+ 	spin_lock(&mm->unused_lock);
+ 	while (mm->num_unused < MM_UNUSED_TARGET) {
+ 		spin_unlock(&mm->unused_lock);
+-		node = kmalloc(sizeof(*node), GFP_KERNEL);
++		node = kzalloc(sizeof(*node), GFP_KERNEL);
+ 		spin_lock(&mm->unused_lock);
+ 
+ 		if (unlikely(node == NULL)) {
+@@ -179,7 +179,6 @@ static struct drm_mm_node *drm_mm_split_at_start(struct drm_mm_node *parent,
+ 
+ 	INIT_LIST_HEAD(&child->fl_entry);
+ 
+-	child->free = 0;
+ 	child->size = size;
+ 	child->start = parent->start;
+ 	child->mm = parent->mm;
+@@ -280,6 +279,9 @@ void drm_mm_put_block(struct drm_mm_node *cur)
+ 
+ 	int merged = 0;
+ 
++	BUG_ON(cur->scanned_block || cur->scanned_prev_free
++				  || cur->scanned_next_free);
++
+ 	if (cur_head->prev != root_head) {
+ 		prev_node =
+ 		    list_entry(cur_head->prev, struct drm_mm_node, ml_entry);
+@@ -359,6 +361,8 @@ struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm,
+ 	struct drm_mm_node *best;
+ 	unsigned long best_size;
+ 
++	BUG_ON(mm->scanned_blocks);
++
+ 	best = NULL;
+ 	best_size = ~0UL;
+ 
+@@ -394,6 +398,8 @@ struct drm_mm_node *drm_mm_search_free_in_range(const struct drm_mm *mm,
+ 	struct drm_mm_node *best;
+ 	unsigned long best_size;
+ 
++	BUG_ON(mm->scanned_blocks);
++
+ 	best = NULL;
+ 	best_size = ~0UL;
+ 
+@@ -419,6 +425,158 @@ struct drm_mm_node *drm_mm_search_free_in_range(const struct drm_mm *mm,
+ }
+ EXPORT_SYMBOL(drm_mm_search_free_in_range);
+ 
++/**
++ * Initialize lru scanning.
++ *
++ * This simply sets up the scanning routines with the parameters for the desired
++ * hole.
++ *
++ * Warning: As long as the scan list is non-empty, no other operations than
++ * adding/removing nodes to/from the scan list are allowed.
++ */
++void drm_mm_init_scan(struct drm_mm *mm, unsigned long size,
++		      unsigned alignment)
++{
++	mm->scan_alignment = alignment;
++	mm->scan_size = size;
++	mm->scanned_blocks = 0;
++	mm->scan_hit_start = 0;
++	mm->scan_hit_size = 0;
++}
++EXPORT_SYMBOL(drm_mm_init_scan);
++
++/**
++ * Add a node to the scan list that might be freed to make space for the desired
++ * hole.
++ *
++ * Returns non-zero, if a hole has been found, zero otherwise.
++ */
++int drm_mm_scan_add_block(struct drm_mm_node *node)
++{
++	struct drm_mm *mm = node->mm;
++	struct list_head *prev_free, *next_free;
++	struct drm_mm_node *prev_node, *next_node;
++
++	mm->scanned_blocks++;
++
++	prev_free = next_free = NULL;
++
++	BUG_ON(node->free);
++	node->scanned_block = 1;
++	node->free = 1;
++
++	if (node->ml_entry.prev != &mm->ml_entry) {
++		prev_node = list_entry(node->ml_entry.prev, struct drm_mm_node,
++				       ml_entry);
++
++		if (prev_node->free) {
++			list_del(&prev_node->ml_entry);
++
++			node->start = prev_node->start;
++			node->size += prev_node->size;
++
++			prev_node->scanned_prev_free = 1;
++
++			prev_free = &prev_node->fl_entry;
++		}
++	}
++
++	if (node->ml_entry.next != &mm->ml_entry) {
++		next_node = list_entry(node->ml_entry.next, struct drm_mm_node,
++				       ml_entry);
++
++		if (next_node->free) {
++			list_del(&next_node->ml_entry);
++
++			node->size += next_node->size;
++
++			next_node->scanned_next_free = 1;
++
++			next_free = &next_node->fl_entry;
++		}
++	}
++
++	/* The fl_entry list is not used for allocated objects, so these two
++	 * pointers can be abused (as long as no allocations in this memory
++	 * manager happens). */
++	node->fl_entry.prev = prev_free;
++	node->fl_entry.next = next_free;
++
++	if (check_free_mm_node(node, mm->scan_size, mm->scan_alignment)) {
++		mm->scan_hit_start = node->start;
++		mm->scan_hit_size = node->size;
++
++		return 1;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL(drm_mm_scan_add_block);
++
++/**
++ * Remove a node from the scan list.
++ *
++ * Nodes _must_ be removed in the exact same order from the scan list as they
++ * have been added, otherwise the internal state of the memory manager will be
++ * corrupted.
++ *
++ * When the scan list is empty, the selected memory nodes can be freed. An
++ * immediately following drm_mm_search_free with best_match = 0 will then return
++ * the just freed block (because it's at the top of the fl_entry list).
++ *
++ * Returns one if this block should be evicted, zero otherwise. Will always
++ * return zero when no hole has been found.
++ */
++int drm_mm_scan_remove_block(struct drm_mm_node *node)
++{
++	struct drm_mm *mm = node->mm;
++	struct drm_mm_node *prev_node, *next_node;
++
++	mm->scanned_blocks--;
++
++	BUG_ON(!node->scanned_block);
++	node->scanned_block = 0;
++	node->free = 0;
++
++	prev_node = list_entry(node->fl_entry.prev, struct drm_mm_node,
++			       fl_entry);
++	next_node = list_entry(node->fl_entry.next, struct drm_mm_node,
++			       fl_entry);
++
++	if (prev_node) {
++		BUG_ON(!prev_node->scanned_prev_free);
++		prev_node->scanned_prev_free = 0;
++
++		list_add_tail(&prev_node->ml_entry, &node->ml_entry);
++
++		node->start = prev_node->start + prev_node->size;
++		node->size -= prev_node->size;
++	}
++
++	if (next_node) {
++		BUG_ON(!next_node->scanned_next_free);
++		next_node->scanned_next_free = 0;
++
++		list_add(&next_node->ml_entry, &node->ml_entry);
++
++		node->size -= next_node->size;
++	}
++
++	INIT_LIST_HEAD(&node->fl_entry);
++
++	/* Only need to check for containment because start&size for the
++	 * complete resulting free block (not just the desired part) is
++	 * stored. */
++	if (node->start >= mm->scan_hit_start &&
++	    node->start + node->size
++	    		<= mm->scan_hit_start + mm->scan_hit_size) {
++		return 1;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL(drm_mm_scan_remove_block);
++
+ int drm_mm_clean(struct drm_mm * mm)
+ {
+ 	struct list_head *head = &mm->ml_entry;
+@@ -433,6 +591,7 @@ int drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
+ 	INIT_LIST_HEAD(&mm->fl_entry);
+ 	INIT_LIST_HEAD(&mm->unused_nodes);
+ 	mm->num_unused = 0;
++	mm->scanned_blocks = 0;
+ 	spin_lock_init(&mm->unused_lock);
+ 
+ 	return drm_mm_create_tail_node(mm, start, size, 0);
+diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
+index 4c10be3..83a7495 100644
+--- a/include/drm/drm_mm.h
++++ b/include/drm/drm_mm.h
+@@ -44,7 +44,10 @@
+ struct drm_mm_node {
+ 	struct list_head fl_entry;
+ 	struct list_head ml_entry;
+-	int free;
++	unsigned free : 1;
++	unsigned scanned_block : 1;
++	unsigned scanned_prev_free : 1;
++	unsigned scanned_next_free : 1;
+ 	unsigned long start;
+ 	unsigned long size;
+ 	struct drm_mm *mm;
+@@ -57,6 +60,11 @@ struct drm_mm {
+ 	struct list_head unused_nodes;
+ 	int num_unused;
+ 	spinlock_t unused_lock;
++	unsigned scan_alignment;
++	unsigned long scan_size;
++	unsigned long scan_hit_start;
++	unsigned scan_hit_size;
++	unsigned scanned_blocks;
+ };
+ 
+ /*
+@@ -133,6 +141,11 @@ static inline struct drm_mm *drm_get_mm(struct drm_mm_node *block)
+ 	return block->mm;
+ }
+ 
++void drm_mm_init_scan(struct drm_mm *mm, unsigned long size,
++		      unsigned alignment);
++int drm_mm_scan_add_block(struct drm_mm_node *node);
++int drm_mm_scan_remove_block(struct drm_mm_node *node);
++
+ extern void drm_mm_debug_table(struct drm_mm *mm, const char *prefix);
+ #ifdef CONFIG_DEBUG_FS
+ int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm);
+-- 
+1.7.2.5
+

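[Editor's note: the scan helpers above grow a candidate hole by feeding blocks in LRU order and, once it is large enough, mark every block inside the hit window for eviction. A heavily simplified 1-D sketch of that idea; the real drm_mm also coalesces with already-free neighbours and supports exact roll-back via `drm_mm_scan_remove_block`, both omitted here:]

```c
#include <assert.h>

struct scan {
	unsigned long want;			/* hole size we need */
	unsigned long hole_start, hole_size;	/* candidate hole so far */
	unsigned long hit_start, hit_size;	/* set once a hole is found */
};

/* Feed one block; returns 1 when the accumulated contiguous run
 * is large enough (analogous to drm_mm_scan_add_block). */
static int scan_add(struct scan *s, unsigned long start, unsigned long size)
{
	if (s->hole_size == 0 || start != s->hole_start + s->hole_size) {
		s->hole_start = start;	/* not contiguous: restart hole */
		s->hole_size = 0;
	}
	s->hole_size += size;
	if (s->hole_size >= s->want) {
		s->hit_start = s->hole_start;
		s->hit_size = s->hole_size;
		return 1;
	}
	return 0;
}

/* Containment check, as in drm_mm_scan_remove_block: a block is
 * evicted iff it lies entirely inside the hit window. */
static int scan_should_evict(const struct scan *s,
			     unsigned long start, unsigned long size)
{
	return start >= s->hit_start &&
	       start + size <= s->hit_start + s->hit_size;
}
```

Because the hit window stores the whole resulting free block, containment alone is enough to classify each scanned block, which is the same observation the patch makes in its final comment.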
Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-radeon-kms-fix-for-radeon-on-systems-4GB-without.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm-radeon-kms-fix-for-radeon-on-systems-4GB-without.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,46 @@
+From: Daniel Haid <d.haid at gogi.tv>
+Date: Wed, 8 Jun 2011 20:04:45 +1000
+Subject: [PATCH 10/10] drm/radeon/kms: fix for radeon on systems >4GB without hardware iommu
+
+commit 2e49607f2fdfe966ea6caae27c1e7547b917ccb7 upstream.
+
+commit 62fff811d73095bd95579d72f558f03c78f7914a upstream.
+
+On my x86_64 system with >4GB of ram and swiotlb instead of
+a hardware iommu (because I have a VIA chipset), the call
+to pci_set_dma_mask (see below) with 40bits returns an error.
+
+But it seems that the radeon driver is designed to have
+need_dma32 = true exactly if pci_set_dma_mask is called
+with 32 bits and false if it is called with 40 bits.
+
+I have read somewhere that the default are 32 bits. So if the
+call fails I suppose that need_dma32 should be set to true.
+
+And indeed the patch fixes the problem I have had before
+and which I had described here:
+http://choon.net/forum/read.php?21,106131,115940
+
+Acked-by: Alex Deucher <alexdeucher at gmail.com>
+Signed-off-by: Dave Airlie <airlied at redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/radeon/radeon_device.c |    1 +
+ 1 files changed, 1 insertions(+), 0 deletions(-)
+
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index ac47fd0..6a78b34 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -682,6 +682,7 @@ int radeon_device_init(struct radeon_device *rdev,
+ 	dma_bits = rdev->need_dma32 ? 32 : 40;
+ 	r = pci_set_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits));
+ 	if (r) {
++		rdev->need_dma32 = true;
+ 		printk(KERN_WARNING "radeon: No suitable DMA available.\n");
+ 	}
+ 
+-- 
+1.7.2.5
+

Added: dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm_mm-extract-check_free_mm_node.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze/linux-2.6/debian/patches/bugfix/all/drm_mm-extract-check_free_mm_node.patch	Sat Aug  6 11:24:48 2011	(r17887)
@@ -0,0 +1,143 @@
+From: Daniel Vetter <daniel.vetter at ffwll.ch>
+Date: Fri, 17 Jun 2011 10:04:19 -0500
+Subject: [PATCH 01/10] drm_mm: extract check_free_mm_node
+
+commit d4a82251610c863bab7f457cb7a76a4bf01abb21 upstream.
+
+BugLink: http://bugs.launchpad.net/bugs/599017
+
+There are already two copies of this logic. And the new scanning
+stuff will add some more. So extract it into a small helper
+function.
+
+Signed-off-by: Daniel Vetter <daniel.vetter at ffwll.ch>
+Acked-by: Thomas Hellstrom <thellstrom at vmwgfx.com>
+Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
+Signed-off-by: Dave Airlie <airlied at redhat.com>
+(backported from commit 7a6b2896f261894dde287d3faefa4b432cddca53 upstream)
+
+Signed-off-by: Seth Forshee <seth.forshee at canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
+---
+ drivers/gpu/drm/drm_mm.c |   69 ++++++++++++++++++++++-----------------------
+ 1 files changed, 34 insertions(+), 35 deletions(-)
+
+diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
+index 2ac074c8..4935e91 100644
+--- a/drivers/gpu/drm/drm_mm.c
++++ b/drivers/gpu/drm/drm_mm.c
+@@ -328,6 +328,27 @@ void drm_mm_put_block(struct drm_mm_node *cur)
+ 
+ EXPORT_SYMBOL(drm_mm_put_block);
+ 
++static int check_free_mm_node(struct drm_mm_node *entry, unsigned long size,
++			      unsigned alignment)
++{
++	unsigned wasted = 0;
++
++	if (entry->size < size)
++		return 0;
++
++	if (alignment) {
++		register unsigned tmp = entry->start % alignment;
++		if (tmp)
++			wasted = alignment - tmp;
++	}
++
++	if (entry->size >= size + wasted) {
++		return 1;
++	}
++
++	return 0;
++}
++
+ struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm,
+ 				       unsigned long size,
+ 				       unsigned alignment, int best_match)
+@@ -337,31 +358,22 @@ struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm,
+ 	struct drm_mm_node *entry;
+ 	struct drm_mm_node *best;
+ 	unsigned long best_size;
+-	unsigned wasted;
+ 
+ 	best = NULL;
+ 	best_size = ~0UL;
+ 
+ 	list_for_each(list, free_stack) {
+ 		entry = list_entry(list, struct drm_mm_node, fl_entry);
+-		wasted = 0;
+ 
+-		if (entry->size < size)
++		if (!check_free_mm_node(entry, size, alignment))
+ 			continue;
+ 
+-		if (alignment) {
+-			register unsigned tmp = entry->start % alignment;
+-			if (tmp)
+-				wasted += alignment - tmp;
+-		}
++		if (!best_match)
++			return entry;
+ 
+-		if (entry->size >= size + wasted) {
+-			if (!best_match)
+-				return entry;
+-			if (entry->size < best_size) {
+-				best = entry;
+-				best_size = entry->size;
+-			}
++		if (entry->size < best_size) {
++			best = entry;
++			best_size = entry->size;
+ 		}
+ 	}
+ 
+@@ -381,38 +393,25 @@ struct drm_mm_node *drm_mm_search_free_in_range(const struct drm_mm *mm,
+ 	struct drm_mm_node *entry;
+ 	struct drm_mm_node *best;
+ 	unsigned long best_size;
+-	unsigned wasted;
+ 
+ 	best = NULL;
+ 	best_size = ~0UL;
+ 
+ 	list_for_each(list, free_stack) {
+ 		entry = list_entry(list, struct drm_mm_node, fl_entry);
+-		wasted = 0;
+-
+-		if (entry->size < size)
+-			continue;
+ 
+ 		if (entry->start > end || (entry->start+entry->size) < start)
+ 			continue;
+ 
+-		if (entry->start < start)
+-			wasted += start - entry->start;
++		if (!check_free_mm_node(entry, size, alignment))
++			continue;
+ 
+-		if (alignment) {
+-			register unsigned tmp = (entry->start + wasted) % alignment;
+-			if (tmp)
+-				wasted += alignment - tmp;
+-		}
++		if (!best_match)
++			return entry;
+ 
+-		if (entry->size >= size + wasted &&
+-		    (entry->start + wasted + size) <= end) {
+-			if (!best_match)
+-				return entry;
+-			if (entry->size < best_size) {
+-				best = entry;
+-				best_size = entry->size;
+-			}
++		if (entry->size < best_size) {
++			best = entry;
++			best_size = entry->size;
+ 		}
+ 	}
+ 
+-- 
+1.7.2.5
+

Modified: dists/squeeze/linux-2.6/debian/patches/series/36
==============================================================================
--- dists/squeeze/linux-2.6/debian/patches/series/36	Fri Aug  5 17:56:38 2011	(r17886)
+++ dists/squeeze/linux-2.6/debian/patches/series/36	Sat Aug  6 11:24:48 2011	(r17887)
@@ -5,3 +5,14 @@
 - bugfix/all/fix-for-buffer-overflow-in-ldm_frag_add-not-sufficient.patch
 - bugfix/x86/x86-amd-do-not-enable-arat-feature-on-amd-processors-below.patch
 + bugfix/all/stable/2.6.32.42.patch
+
++ bugfix/all/drm_mm-extract-check_free_mm_node.patch
++ bugfix/all/drm-implement-helper-functions-for-scanning-lru-list.patch
++ bugfix/all/drm-i915-prepare-for-fair-lru-eviction.patch
++ bugfix/all/drm-i915-Move-the-eviction-logic-to-its-own-file.patch
++ bugfix/all/drm-i915-Implement-fair-lru-eviction-across-both-rin.patch
++ bugfix/all/drm-i915-Maintain-LRU-order-of-inactive-objects-upon.patch
++ bugfix/all/drm-i915-evict-Ensure-we-completely-cleanup-on-failu.patch
++ bugfix/all/drm-i915-Periodically-flush-the-active-lists-and-req.patch
++ bugfix/all/drm-i915-Add-a-no-lvds-quirk-for-the-Asus-EeeBox-PC-.patch
++ bugfix/all/drm-radeon-kms-fix-for-radeon-on-systems-4GB-without.patch


