From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <05640016b2f7b677b54134d515e69f0f@felloff.net>
Date: Thu, 14 May 2015 13:50:33 +0200
From: cinap_lenrek@felloff.net
To: 9fans@9fans.net
In-Reply-To: <3008924.em7JGlQAbN@krypton>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] Ports tree for Plan 9
Topicbox-Message-UUID: 52d67390-ead9-11e9-9d60-3106f5b1d025

found it. the server sends a Content-Encoding header, which causes hget
to add a decompression filter, so you get a bare tarball as output.

<- Content-Type: application/x-gzip
<- Content-Encoding: gzip

from the w3c:

The Content-Encoding entity-header field is used as a modifier to the
media-type. When present, its value indicates what additional content
codings have been applied to the entity-body, and thus what decoding
mechanisms must be applied in order to obtain the media-type referenced
by the Content-Type header field. Content-Encoding is primarily used to
allow a document to be compressed without losing the identity of its
underlying media type.

this is clearly silly, as the file is already compressed, and
decompressing it will not yield the indicated content-type
(application/x-gzip) but a bare tarball. maybe the w3c is wrong, or is
ignored in practice, or we need to handle gzip specially.

the problem is that some webservers compress the data in transit: you
request an html file and get gzip back, and that's why hget
uncompresses.

--
cinap