From: Marco Elver
Date: Thu, 28 Aug 2025 10:43:16 +0200
Subject: Re: [PATCH v1 34/36] kfence: drop nth_page() usage
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, Alexander Potapenko, Dmitry Vyukov, Andrew Morton, Brendan Jackman, Christoph Lameter, Dennis Zhou, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe, Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com, kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds, linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song, netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy, Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev, Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
References: <20250827220141.262669-1-david@redhat.com> <20250827220141.262669-35-david@redhat.com>
In-Reply-To: <20250827220141.262669-35-david@redhat.com>
List-Id: Development discussion of WireGuard <wireguard@lists.zx2c4.com>

On Thu, 28 Aug 2025 at 00:11, 'David Hildenbrand' via kasan-dev wrote:
>
> We want to get rid of nth_page(), and kfence init code is the last user.
>
> Unfortunately, we might actually walk a PFN range where the pages are
> not contiguous, because we might be allocating an area from memblock
> that could span memory sections in problematic kernel configs (SPARSEMEM
> without SPARSEMEM_VMEMMAP).
>
> We could check whether the page range is contiguous using
> page_range_contiguous() and fail kfence init, or make kfence
> incompatible with these problematic kernel configs.
>
> Let's keep it simple and just use pfn_to_page(), iterating over PFNs.
>
> Cc: Alexander Potapenko
> Cc: Marco Elver
> Cc: Dmitry Vyukov
> Signed-off-by: David Hildenbrand

Reviewed-by: Marco Elver

Thanks.
> ---
>  mm/kfence/core.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 0ed3be100963a..727c20c94ac59 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -594,15 +594,14 @@ static void rcu_guarded_free(struct rcu_head *h)
>   */
>  static unsigned long kfence_init_pool(void)
>  {
> -	unsigned long addr;
> -	struct page *pages;
> +	unsigned long addr, start_pfn;
>  	int i;
>
>  	if (!arch_kfence_init_pool())
>  		return (unsigned long)__kfence_pool;
>
>  	addr = (unsigned long)__kfence_pool;
> -	pages = virt_to_page(__kfence_pool);
> +	start_pfn = PHYS_PFN(virt_to_phys(__kfence_pool));
>
>  	/*
>  	 * Set up object pages: they must have PGTY_slab set to avoid freeing
> @@ -613,11 +612,12 @@ static unsigned long kfence_init_pool(void)
>  	 * enters __slab_free() slow-path.
>  	 */
>  	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(nth_page(pages, i));
> +		struct slab *slab;
>
>  		if (!i || (i % 2))
>  			continue;
>
> +		slab = page_slab(pfn_to_page(start_pfn + i));
>  		__folio_set_slab(slab_folio(slab));
>  #ifdef CONFIG_MEMCG
>  		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
> @@ -665,10 +665,12 @@ static unsigned long kfence_init_pool(void)
>
>  reset_slab:
>  	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(nth_page(pages, i));
> +		struct slab *slab;
>
>  		if (!i || (i % 2))
>  			continue;
> +
> +		slab = page_slab(pfn_to_page(start_pfn + i));
>  #ifdef CONFIG_MEMCG
>  		slab->obj_exts = 0;
>  #endif
> --
> 2.50.1
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/kasan-dev/20250827220141.262669-35-david%40redhat.com.