9fans - fans of the OS Plan 9 from Bell Labs
From: "David Butler" <gdb@dbSystems.com>
To: <9fans@cse.psu.edu>
Subject: Re: [9fans] httpd scripting
Date: Mon,  2 Jun 2003 07:12:51 -0500
Message-ID: <13b301c32900$56569700$644cb2cc@kds>
In-Reply-To: <3EDB3B44.8030805@proweb.co.uk>

I did a lot of work in the http area on Plan 9, back on
version 2 of the system. I took the approach that everything
is executable (compiled) rather than scripted. It was fast
and very flexible. Perhaps I can see what it takes to port
it to version 4 and make it available...

David

----- Original Message -----
From: "matt" <matt@proweb.co.uk>
To: <9fans@cse.psu.edu>
Sent: Monday, June 02, 2003 6:55 AM
Subject: Re: [9fans] httpd scripting


> Skip Tavakkolian wrote:
>
> >Has anyone done any deep thinking on this? I've just started
> >looking into it. I have looked at Pegasus, but I need something a
> >little more specialized.  My own preference is for an awk-like
> >environment, but nothing definite yet. I want to process code
> >fragments inside html docs, the same thing as jsp, asp, php, etc.
> >
>
> as someone who uses PHP (and previously ASP) professionally - don't 8)
>
> rc and friends are enough
>
> if you want awk then use awk  8)
>
> as for cookies, headers, etc., you just need a quick script to chuck the
> request into the environment (which is what PHP does)
>
> I did write a script to do it once
>
>
> http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&safe=off&threadm=015d01c22ed3%24fcc596e0%246501a8c0%40KIKE&rnum=1&prev=/groups%3Fq%3Dgroup:comp.os.plan9%2Bhttpd%2Bmatt%26hl%3Den%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26safe%3Doff%26selm%3D015d01c22ed3%2524fcc596e0%25246501a8c0%2540KIKE%26rnum%3D1
>
> but I seem to have lost the code during a re-install (curses for putting
> the magic directory outside of /usr)
>
> All you really need to do is split the headers on the first colon:
> ([^:]+): ?(.*)
>
> (after skipping past the first line)
> and then pop the results into the environment
>
> echo $match_2 > /env/$match_1
>
> when you run out of matches the rest of stdin is the body & you might
> even be able to trust the Content-Length value

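For anyone who wants to reconstruct matt's lost script, here is a
minimal rc sketch of the same recipe. It is illustrative only: it
assumes the raw request arrives on standard input with carriage
returns already stripped, and the variable names are mine, not
matt's.

#!/bin/rc
# sketch of matt's recipe: pour each HTTP request header into /env.
# assumes the raw request is on stdin and CRs are already gone.
ifs='
'			# split `{} output on newlines only
line=`{read}		# discard the request line, e.g. GET / HTTP/1.0
line=`{read}		# first header
while(! ~ $"line ''){
	# matt's split, ([^:]+): ?(.*), done here with sed
	name=`{echo $line | sed 's/:.*//'}
	value=`{echo $line | sed 's/^[^:]*: *//'}
	echo -n $value > /env/$name
	line=`{read}
}
# the blank line ends the headers; the rest of stdin is the body,
# whose size is in /env/Content-Length if you trust the client.

Because Plan 9 exposes the environment as files, later stages of the
handler can just cat /env/Content-Length; rc itself wants the quoted
form $'Content-Length', since the name contains a dash.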
Thread overview: 19+ messages
2003-06-01  0:47 Skip Tavakkolian
2003-06-01 13:27 ` Kenji Arisawa
2003-06-01 14:24   ` Skip Tavakkolian
2003-06-01 15:28     ` boyd, rounin
2003-06-01 14:58       ` Skip Tavakkolian
2003-06-01 15:59         ` boyd, rounin
2003-06-01 20:56     ` Kenji Arisawa
2003-06-02 11:55 ` matt
2003-06-02 12:12   ` David Butler [this message]
2003-06-02 16:22     ` matt
2003-06-02 12:35   ` Charles Forsyth
2003-06-02 13:08     ` matt
2003-06-02 13:51       ` Charles Forsyth
2003-06-02 15:38       ` Ian Broster
2003-06-02 16:12         ` matt
     [not found]         ` <e38733ba634d64e975019d2c0de81b2b@proxima.alt.za>
2003-06-02 16:15           ` spam
2003-06-02 17:55         ` Joel Salomon
2003-06-03  9:42       ` Robby
2003-06-03 12:50         ` William Ahern
