From: Paul Jackson
Date: Thu, 7 Dec 2006 04:31:38 +0000 (-0800)
Subject: [PATCH] memory page alloc minor cleanups
X-Git-Tag: v2.6.20-rc1~145^2^2~358
X-Git-Url: http://pilppa.com/gitweb/?a=commitdiff_plain;h=0798e5193cd70f6c867ec176d7730589f944c627;p=linux-2.6-omap-h63xx.git

[PATCH] memory page alloc minor cleanups

- s/freeliest/freelist/ spelling fix

- Check for NULL *z zone seems useless - even if it could happen, so
  what?  Perhaps we should have a check later on if we are faced with an
  allocation request that is not allowed to fail - shouldn't that be a
  serious kernel error, passing an empty zonelist with a mandate to not
  fail?

- Initializing 'z' to zonelist->zones can wait until after the first
  get_page_from_freelist() fails; we only use 'z' in the wakeup_kswapd()
  loop, so let's initialize 'z' there, in a 'for' loop.  Seems clearer.

- Remove superfluous braces around a break

- Fix a couple errant spaces

- Adjust indentation on the cpuset_zone_allowed() check, to match the
  lines just before it -- seems easier to read in this case.

- Add another set of braces to the zone_watermark_ok logic

From: Paul Jackson

Backout one item from a previous "memory page_alloc minor cleanups"
patch.  Until and unless we are certain that no one can ever pass an
empty zonelist to __alloc_pages(), this check for an empty zonelist (or
some BUG equivalent) is essential.  The code in get_page_from_freelist()
blows up if passed an empty zonelist.

Signed-off-by: Paul Jackson
Acked-by: Christoph Lameter
Cc: Nick Piggin
Signed-off-by: Paul Jackson
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aa6fcc7ca66..08360aa111f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -486,7 +486,7 @@ static void free_one_page(struct zone *zone, struct page *page, int order)
 	spin_lock(&zone->lock);
 	zone->all_unreclaimable = 0;
 	zone->pages_scanned = 0;
-	__free_one_page(page, zone ,order);
+	__free_one_page(page, zone, order);
 	spin_unlock(&zone->lock);
 }
 
@@ -926,7 +926,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 }
 
 /*
- * get_page_from_freeliest goes through the zonelist trying to allocate
+ * get_page_from_freelist goes through the zonelist trying to allocate
  * a page.
  */
 static struct page *
@@ -948,8 +948,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order,
 			zone->zone_pgdat != zonelist->zones[0]->zone_pgdat))
 				break;
 		if ((alloc_flags & ALLOC_CPUSET) &&
-		    !cpuset_zone_allowed(zone, gfp_mask))
-			continue;
+			!cpuset_zone_allowed(zone, gfp_mask))
+				continue;
 
 		if (!(alloc_flags & ALLOC_NO_WATERMARKS)) {
 			unsigned long mark;
@@ -959,17 +959,18 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order,
 				mark = zone->pages_low;
 			else
 				mark = zone->pages_high;
-			if (!zone_watermark_ok(zone , order, mark,
-				    classzone_idx, alloc_flags))
+			if (!zone_watermark_ok(zone, order, mark,
+				    classzone_idx, alloc_flags)) {
 				if (!zone_reclaim_mode ||
 				    !zone_reclaim(zone, gfp_mask, order))
 					continue;
+			}
 		}
 
 		page = buffered_rmqueue(zonelist, zone, order, gfp_mask);
-		if (page) {
+		if (page)
 			break;
-		}
+
 	} while (*(++z) != NULL);
 	return page;
 }
@@ -1005,9 +1006,8 @@ restart:
 	if (page)
 		goto got_pg;
 
-	do {
+	for (z = zonelist->zones; *z; z++)
 		wakeup_kswapd(*z, order);
-	} while (*(++z));
 
 	/*
 	 * OK, we're below the kswapd watermark and have kicked background
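
The bullets above describe the control-flow shapes the patch settles on, and the diff shows them in place in mm/page_alloc.c.  For readers who want to try those two shapes in isolation, here is a minimal, self-contained C sketch.  It is not kernel code: struct zone, struct zonelist, zone_watermark_ok(), zone_reclaim(), buffered_rmqueue() and wakeup_kswapd() below are stand-ins invented for illustration, with a zonelist modelled as a NULL-terminated array of zone pointers.

/*
 * Sketch only (stand-in types and stub helpers, not the kernel's):
 * 1) the brace-clarified watermark/reclaim check inside the do/while
 *    zonelist scan, and
 * 2) the 'for' loop that walks the zonelist to wake kswapd only after
 *    the first scan has failed, initializing 'z' right in the loop.
 */
#include <stddef.h>
#include <stdio.h>

struct zone { int free_pages; int watermark; };

/* NULL-terminated array of zone pointers, like zonelist->zones */
struct zonelist { struct zone *zones[4]; };

static int zone_watermark_ok(struct zone *z)  { return z->free_pages >= z->watermark; }
static int zone_reclaim(struct zone *z)       { (void)z; return 0; } /* reclaim frees nothing */
static void *buffered_rmqueue(struct zone *z) { return zone_watermark_ok(z) ? (void *)z : NULL; }
static void wakeup_kswapd(struct zone *z)     { printf("wake kswapd for zone %p\n", (void *)z); }

/* Caller must not pass an empty zonelist: the do/while dereferences *z first. */
static void *get_page_from_freelist(struct zonelist *zonelist)
{
	struct zone **z = zonelist->zones;
	void *page = NULL;
	struct zone *zone;

	do {
		zone = *z;

		if (!zone_watermark_ok(zone)) {	/* braces make the nesting explicit */
			if (!zone_reclaim(zone))
				continue;	/* jumps to the while() test: next zone */
		}

		page = buffered_rmqueue(zone);
		if (page)
			break;

	} while (*(++z) != NULL);
	return page;
}

int main(void)
{
	struct zone a = { 1, 8 };	/* both zones below their watermark */
	struct zone b = { 2, 8 };
	struct zonelist zl = { { &a, &b, NULL } };
	struct zone **z;

	if (!get_page_from_freelist(&zl)) {
		/* First pass failed: only now is 'z' needed, so initialize it
		 * here, in the 'for' loop, as the patch does. */
		for (z = zl.zones; *z; z++)
			wakeup_kswapd(*z);
	}
	return 0;
}

With both stand-in zones below their watermark and the stub reclaim freeing nothing, the first scan returns NULL and the 'for' loop then wakes kswapd once per zone, which is the ordering the commit message argues for.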