From: Linus Torvalds
Date: Thu, 21 Aug 2025 16:40:13 -0400
Subject: Re: [PATCH RFC 31/35] crypto: remove nth_page() usage within SG entry
To: David Hildenbrand
In-Reply-To: <2926d7d9-b44e-40c0-b05d-8c42e99c511d@redhat.com>
References: <20250821200701.1329277-1-david@redhat.com>
 <20250821200701.1329277-32-david@redhat.com>
 <2926d7d9-b44e-40c0-b05d-8c42e99c511d@redhat.com>
Cc: linux-kernel@vger.kernel.org, Herbert Xu, "David S. Miller",
 Alexander Potapenko, Andrew Morton, Brendan Jackman, Christoph Lameter,
 Dennis Zhou, Dmitry Vyukov, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, iommu@lists.linux.dev,
 io-uring@vger.kernel.org, Jason Gunthorpe, Jens Axboe, Johannes Weiner,
 John Hubbard, kasan-dev@googlegroups.com, kvm@vger.kernel.org,
 "Liam R. Howlett", linux-arm-kernel@axis.com,
 linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org,
 linux-ide@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver,
 Marek Szyprowski, Michal Hocko, Mike Rapoport, Muchun Song,
 netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy,
 Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev,
 Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
List-Id: Development discussion of WireGuard

On Thu, Aug 21, 2025 at 4:29 PM David Hildenbrand wrote:
>
> > Because doing a 64-bit shift on x86-32 is like three cycles. Doing a
> > 64-bit signed division by a simple constant is something like ten
> > strange instructions even if the end result is only 32-bit.
>
> I would have thought that the compiler is smart enough to optimize that?
> PAGE_SIZE is a constant.

Oh, the compiler optimizes things. But dividing a 64-bit signed value
by a constant is still quite complicated.

It doesn't generate a 'div' instruction, but it generates something
like this:

        movl    %ebx, %edx
        sarl    $31, %edx
        movl    %edx, %eax
        xorl    %edx, %edx
        andl    $4095, %eax
        addl    %ecx, %eax
        adcl    %ebx, %edx

and that's certainly a lot faster than an actual 64-bit divide would be.

An unsigned divide - or a shift - results in just

        shrdl   $12, %ecx, %eax

which is still not the fastest instruction (I think shrdl gets split
into two uops), but it's certainly simpler and easier to read.

             Linus