From: Jacob Moody
Date: Wed, 8 May 2024 11:10:00 -0500
To: 9front@9front.org
Subject: Re: [9front] Enabling a service

On 5/8/24 10:49, Frank D. Engel, Jr. wrote:
> How did it work for Venti when there were multiple users?

My understanding of venti is somewhat limited, so take my explanation with a grain of salt. If you have divergent fossils using the same venti, you will get divergent root scores; they become two paths that have to be merged manually.

> You still have a single source of truth on the file server as all of the
> data would still be written there with this approach and it would still
> manage the directory structure, so any data that would come from other
> users would ultimately just be pulled from the file server and loaded
> into their cache separately.

I got your proposal the wrong way around: you are talking about a local venti and a remote fossil. At the point you decide that you want a single remote source of truth (filesystem) that everyone must reconcile with, you are still going to have issues with merging. Venti works as a backing for multiple disjoint fossils because it has no single source of truth for the filesystem; it just stores blocks.

No matter how you cut it, you are going to have to deal with multiple people merging their caches into the single source of truth with potentially latent updates. Either you serialize all mutations at the source of truth (and at that point you are latency bound) or you have to be clever about merging. A lot of ink has been spilled about this problem in the scope of web programming (CRDTs, iirc); perhaps that may serve as some inspiration.
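For a rough flavor of what "clever about merging" looks like, here is a toy last-writer-wins register in Go (made-up names, nothing to do with fossil or venti internals): every write carries a timestamp, and merging two replicas just keeps the newer value, so replicas can converge without funneling every mutation through one source of truth.

package main

import "fmt"

// lwwReg is a toy last-writer-wins register: writes carry a
// timestamp, and merging two replicas keeps the newer value.
type lwwReg struct {
	val string
	ts  int64
}

func (r *lwwReg) set(v string, ts int64) {
	if ts >= r.ts {
		r.val, r.ts = v, ts
	}
}

// merge is commutative and idempotent, so replicas can sync
// with each other in any order and still converge.
func (r *lwwReg) merge(o lwwReg) {
	if o.ts > r.ts {
		*r = o
	}
}

func main() {
	var a, b lwwReg
	a.set("written on the laptop", 100)
	b.set("written on the cpu server", 105)
	a.merge(b)
	b.merge(a)
	fmt.Println(a.val, b.val) // both end up with the later write
}

The catch, of course, is that "keep the newer value" is only an acceptable merge policy for some kinds of data; for a general filesystem you still have to decide what a merge of two divergent trees even means.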
> One challenge might seem to be simultaneous writes to different parts of
> the same block, but in this case the locally calculated hash for the
> block that was written to cache would (hopefully) not match the one
> calculated by the file server which would reflect both updates, so when
> the read would occur the file server would send a hash that would not be
> in the local cache and the read would be sent across to the file server
> for the updated block, with the incorrect local block eventually being
> aged out of the cache as it started to fill up.

Reading this and rereading your previous email, I still do not fully understand how you plan to deal with collisions. If you have a local cache that you write to first and then slowly drain it to the remote system, you are still going to have merge issues. The fileserver is going to say at some point "no, this is based on stale information" and you'll have to figure out how to retroactively reconcile that error with a system that has already forgotten the request.
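To make that failure mode concrete, here is roughly what I mean by the fileserver rejecting stale information (a made-up sketch in Go, not anything fossil actually does): the server only accepts a write whose base version matches its current one, so a client draining an old cache entry gets bounced and then has to re-read and reconcile somehow.

package main

import (
	"errors"
	"fmt"
)

// server keeps one block plus a version counter.
type server struct {
	data []byte
	vers int
}

var errStale = errors.New("write based on stale information")

// write accepts data only if it was based on the server's current
// version; otherwise the client must re-read and reconcile.
func (s *server) write(base int, data []byte) (int, error) {
	if base != s.vers {
		return s.vers, errStale
	}
	s.data = data
	s.vers++
	return s.vers, nil
}

func main() {
	s := &server{}
	// two clients both read version 0, then drain their caches later
	v1, err1 := s.write(0, []byte("client A's block"))
	v2, err2 := s.write(0, []byte("client B's block"))
	fmt.Println(v1, err1) // 1 <nil>
	fmt.Println(v2, err2) // 1 write based on stale information
}

The second client's block is gone from its cache by the time the rejection comes back, which is exactly the reconciliation problem I don't see an answer for yet.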