From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 11 Jun 2024 16:09:22 +0200
From: Szabolcs Nagy
To: Stefan Jumarea
Cc: musl@lists.openwall.com, dalias@libc.org
Message-ID: <20240611140922.GF3766212@port70.net>
References: <20240610123624.305051-2-stefanjumarea02@gmail.com>
In-Reply-To: <20240610123624.305051-2-stefanjumarea02@gmail.com>
Subject: Re: [musl] [PATCH] mallocng: Add MTE support for Aarch64

* Stefan Jumarea [2024-06-10 15:36:25 +0300]:
> @@ -102,17 +107,30 @@ void free(void *p)
>  {
>  	if (!p) return;
>  
> +#ifdef MEMTAG
> +	void *untagged = (void *)((uint64_t)p & ~MTE_TAG_MASK);
> +#else
> +	void *untagged = p;
> +#endif
> +
>  	struct meta *g = get_meta(p);
...
>  static inline struct meta *get_meta(const unsigned char *p)
>  {
>  	assert(!((uintptr_t)p & 15));
> -	int offset = *(const uint16_t *)(p - 2);
> -	int index = get_slot_index(p);
> -	if (p[-4]) {
> +#ifdef MEMTAG
> +	const unsigned char *untagged = (const unsigned char *)((uint64_t)p & ~MTE_TAG_MASK);
> +#else
> +	const unsigned char *untagged = p;
> +#endif

if free just drops the tag, then an incorrectly tagged pointer will not be
detected. musl does some use-after-free checks, so i don't know how important
this is, but i'd check that the passed pointer matches the memory tag (unless
it's 0 sized, and that the tag is non-0), otherwise a forged pointer may cause
corruption.

i don't see where you enable the tagged pointer abi and tag checks (prctl),
or where you add PROT_MTE to the mmapped memory.

https://www.kernel.org/doc/html/latest/arch/arm64/memory-tagging-extension.html
https://www.kernel.org/doc/html/latest/arch/arm64/tagged-address-abi.html

(i think we want the synchronous tag check option.)

note there is software that assumes page granularity for memory protection,
e.g. does a read access at p&-4096, and there may be software that assumes
the top bits of a valid pointer are 0, so unconditionally enabling tagging
and tag checks can be an issue.

another potential issue is the early ALIGN_UP of the malloc size: this
overflows malloc(-1), and i think it changes malloc_usable_size(malloc(1)).
iirc i changed IB when i tried out mte with mallocng.

i would avoid excessive ifdefs in the code, e.g. by using 'p = untag(p);'
and defining untag appropriately in a header. (this might as well do the
tag checks when mte is enabled, but might need special-casing of 0 sized
allocations.)

tagging large areas can be slow: just like 'dc zva,x0' in aarch64 memset,
there is 'dc gva,x0' that tags an entire cacheline. alternatively it may be
better to just rely on page protection for large allocations to avoid the
tagging overhead (don't even mmap them as PROT_MTE).
i've never actually benchmarked what makes sense in practice; i'd at least
put the hand-written loop behind an abstraction.