From: "Jason A. Donenfeld"
To: "Paul E. McKenney"
Cc: Jakub Kicinski, Julia Lawall, linux-block@vger.kernel.org,
	kernel-janitors@vger.kernel.org, bridge@lists.linux.dev,
	linux-trace-kernel@vger.kernel.org, Mathieu Desnoyers,
	kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	"Naveen N. Rao", Christophe Leroy, Nicholas Piggin,
	netdev@vger.kernel.org, wireguard@lists.zx2c4.com,
	linux-kernel@vger.kernel.org, ecryptfs@vger.kernel.org,
	Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
	linux-nfs@vger.kernel.org, linux-can@vger.kernel.org,
	Lai Jiangshan, netfilter-devel@vger.kernel.org,
	coreteam@netfilter.org, Vlastimil Babka
Subject: Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback
Date: Thu, 13 Jun 2024 14:22:41 +0200
References: <20240609082726.32742-1-Julia.Lawall@inria.fr>
	<20240612143305.451abf58@kernel.org>
	<08ee7eb2-8d08-4f1f-9c46-495a544b8c0e@paulmck-laptop>
In-Reply-To: <08ee7eb2-8d08-4f1f-9c46-495a544b8c0e@paulmck-laptop>

On Wed, Jun 12, 2024 at 08:38:02PM -0700, Paul E. McKenney wrote:
> o	Make the current kmem_cache_destroy() asynchronously wait for
>	all memory to be returned, then complete the destruction.
>	(This gets rid of a valuable debugging technique because
>	in normal use, it is a bug to attempt to destroy a kmem_cache
>	that has objects still allocated.)
>
> o	Make a kmem_cache_destroy_rcu() that asynchronously waits for
>	all memory to be returned, then completes the destruction.
>	(This raises the question of what to do if it takes a "long
>	time" for the objects to be freed.)

These seem like the best two options.

> o	Make a kmem_cache_free_barrier() that blocks until all
>	objects in the specified kmem_cache have been freed.
>
> o	Make a kmem_cache_destroy_wait() that waits for all memory to
>	be returned, then does the destruction. This is equivalent to:
>
>	kmem_cache_free_barrier(&mycache);
>	kmem_cache_destroy(&mycache);

These also seem fine, but I'm less keen on the blocking behavior.

Though, along the lines of kmem_cache_destroy_rcu(), you might also
consider renaming this last one to kmem_cache_destroy_rcu_wait/barrier().
That way it's RCU-focused, and you can deal directly with the question
of "how long is too long to block/to memleak?"

Specifically, what I mean is that we can still claim a memory leak has
occurred if one batched kfree_rcu() freeing grace period has elapsed
since the last call to kmem_cache_destroy_rcu_wait/barrier() or
kmem_cache_destroy_rcu(). In that case, you quit blocking, or you quit
asynchronously waiting, and then you splat about a memleak like we have
now.

But then, if that mechanism generally works, we don't really need a new
function and we can just go with the first option of making
kmem_cache_destroy() asynchronously wait. It'll wait, as you described,
but then we adjust the tail of every kfree_rcu() batch freeing cycle to
check if there are _still_ any old outstanding kmem_cache_destroy()
requests. If so, then we can splat and keep the old debugging info we
currently have for finding memleaks.

Jason
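
P.S. For concreteness, here is a rough sketch of that tail-of-batch
check. It is untested, and the names are made up for illustration
(slab_caches_pending_destroy, slab_cache_empty(), finish_cache_destroy(),
and the two extra kmem_cache fields do not exist today); it is only the
shape of the idea, not a real implementation:

/* Caches whose kmem_cache_destroy() has been deferred until empty. */
static LIST_HEAD(slab_caches_pending_destroy);
static DEFINE_SPINLOCK(pending_destroy_lock);

/*
 * Hypothetical hook, called at the tail of every kfree_rcu() batch
 * freeing cycle.
 */
static void kfree_rcu_batch_done(void)
{
	struct kmem_cache *s, *tmp;

	spin_lock(&pending_destroy_lock);
	list_for_each_entry_safe(s, tmp, &slab_caches_pending_destroy,
				 destroy_list) {
		if (slab_cache_empty(s)) {
			/* All objects returned; finish the destroy. */
			list_del(&s->destroy_list);
			finish_cache_destroy(s);
		} else if (s->destroy_batch_elapsed) {
			/*
			 * A full batched freeing cycle has elapsed since
			 * kmem_cache_destroy() was requested and objects
			 * are still allocated: stop waiting and splat,
			 * keeping the debugging info we have today.
			 */
			list_del(&s->destroy_list);
			WARN(1, "kmem_cache_destroy %s: objects still allocated\n",
			     s->name);
		} else {
			/* Allow one full freeing cycle before splatting. */
			s->destroy_batch_elapsed = true;
		}
	}
	spin_unlock(&pending_destroy_lock);
}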