From mboxrd@z Thu Jan  1 00:00:00 1970
To: 9fans@cse.psu.edu
From: markp@panix.com (mark powers)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Message-Id: <20030211213814.A4857988E3@mail3.panix.com>
Subject: [9fans] little addition to hget: dump headers
Date: Tue, 11 Feb 2003 17:36:05 -0500
Topicbox-Message-UUID: 57b1aee2-eacb-11e9-9e20-41e7f4b1d025

hi all,

I just got around to installing html2ps, the perl script. my favorite
feature of it so far is that it can crawl recursively down a tree of
www pages and assemble them into a DSC-conforming postscript file; but
to do url retrieval it needs to invoke a program that dumps http
headers along with the data. ergo, this silly little diff, which gives
hget an -h (print headers) option.

diff $home/src/cmd/hget.c /sys/src/cmd/hget.c
64,65d63
< int headerprint;
<
78c76
< 	fprint(2, "usage: %s [-v] [-h] [-o outfile] [-p body] [-x netmtpt] url\n", argv0);
---
> 	fprint(2, "usage: %s [-v] [-o outfile] [-p body] [-x netmtpt] url\n", argv0);
101,103d98
< 	case 'h':
< 		headerprint = 1;
< 		break;
583,584d577
< 	if(headerprint)
< 		fprint(1, "%s\n", buf);

p.s. anyone managed to port netpbm to APE?