From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from feline.systems ([99.23.220.214]) by ewsd; Mon Oct 15 05:36:15 EDT 2018
Date: Mon, 15 Oct 2018 09:36:01 +0000
From: BurnZeZ@feline.systems
To: 9front@9front.org
Subject: hget HEAD change 9fronthg:4a780de821ac
Message-ID:
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
List-ID: <9front.9front.org>
List-Help:
X-Glyph: ➈
X-Bullshit: map/reduce firewall template-based standard lifecycle controller

This change should be a good thing, but in the non-standards-compliant
real world, many web servers return errors on HEAD requests, or even a
different contentlength than they do for GET.

I think the real problem with hget is that we rely on the exit string
to determine whether the download completed.

I wrote my own hget alternative when experimenting with this.

	term% fhget -h
	usage: fhget [-H header] [-Rtu] file url

By making the output file path explicit, I am able to do:

	f=$1
	tf=$f.part

If $f already exists, fhget exits with an error. Otherwise, everything
it reads from webfs is written to $tf until completion.

Knowing the value of contentlength, if there is some failure but the
size of $tf still ends up matching contentlength (for example, the
entire file was received, but webfs propagates a read error from
/net/tcp/n/data due to packet loss), fhget can ignore the read error
and perform 'mv $tf $f'. In this way, $f never exists unless the
entire file is believed to have been obtained.

If contentlength is unknown, $tf is moved to $f only when the GET
request completes without error. If there is an error, $tf is left as
is; if we then run fhget again with the same args, fhget will request
the range starting from whatever the size of $tf is.

I've made use of this method since January 2016, and it seems to work,
but I may have overlooked something.
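
For illustration only, here is a rough sketch of that bookkeeping. This
is NOT the author's fhget (which is a Plan 9 program reading from
webfs); it is a POSIX-shell approximation, and download() is a
hypothetical stand-in for the webfs body read: it copies up to a given
number of bytes of a local source file into $tf, resuming from $tf's
current size, so a short copy simulates a failed transfer.

```shell
#!/bin/sh
# Sketch of the $f / $f.part scheme described above (assumptions: local
# files stand in for the network; download() is invented for the demo).

download() { # download src tf total-bytes-to-have
	have=0
	if [ -f "$2" ]; then
		have=$(wc -c < "$2")    # resume point = current size of $tf
	fi
	tail -c +"$((have + 1))" "$1" | head -c "$(($3 - have))" >> "$2"
}

fhget_sketch() { # fhget_sketch src file contentlength bytes-received
	src=$1; f=$2; clen=$3; n=$4
	if [ -e "$f" ]; then
		echo "fhget: $f already exists" >&2
		return 1
	fi
	tf=$f.part
	download "$src" "$tf" "$n"
	# Even if the read reported an error, a size matching contentlength
	# means the whole file arrived, so the error can be ignored.
	if [ "$(wc -c < "$tf")" -eq "$clen" ]; then
		mv "$tf" "$f"           # $f exists only once complete
	else
		return 1                # leave $tf in place for a retry
	fi
}
```

Running it twice simulates the retry path: a first call that stops
short leaves only $f.part, and a second call with the same arguments
continues from its size. In real HTTP the retry step corresponds to a
new GET carrying a Range: bytes=N- header, where N is the size of $tf.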