From: <plan9@itic.ca>
To: <9fans@cse.psu.edu>
Subject: Re: [9fans] DHCP
Date: Wed, 19 Jun 2002 14:45:26 +0200 [thread overview]
Message-ID: <011f01c21790$e424c660$07cf2bd4@335400> (raw)
In-Reply-To: <b770f7909e4434ce0e856a9f72475ff9@plan9.bell-labs.com>
----- Original Message -----
From: <presotto@plan9.bell-labs.com>
> I'm about to replace the current file system based state sharing in
> dhcpd with the one from the internet draft:
>
> "DHCP Failover Protocol", Ralph Droms, Bernard Volz, K. Kinnear, Arun
> Kapur, Mark Stapp, Greg Rabil, Mike Dooley, Steve Gonczi, 01/24/2002,
> <draft-ietf-dhc-failover-10.txt>
>
> The current dhcpd addresses the case of a single cpu falling over, but
> has a single point of failure if we lose the shared file server all
> the servers are running from.
>
We have considered these solutions:
- running dhcpd on the file server itself [this does not remove the
  single point of failure];
- running dhcpd on a dedicated pccpudisk [its kfs holds /lib/ndb/... and
  it serves the security files too].
The latter was ok, provided this CPU server and its backups do not reboot
on fs connectivity failure...
> The IETF draft separates the address space so that each server only serves
> from its own pool of addresses but updates the others' state through
> a special protocol so that each can take over should its partner fail.
> I'm a bit bothered that it only supports 2 servers but maybe I'm
> being silly.
>
In the normal IP world, 2 dhcpd instances are the norm. In a Plan 9
architecture it could be interesting to have *any* CPU server offer dhcp,
provided they all share the same coherent address space and serve only when
needed. The problem then becomes how to `share' coherent state information
over the network. The closest and *simplest* protocol could be something
like an [L2-only] Cisco Discovery Protocol that also propagates node
functionality information. The Plan 9 Discovery Protocol could implement a
service-state message semaphore that serves information somewhat à la
/dev/env. I see this more in terms of conventions, since protocol
versioning and new services are not predictable.
> Another possible solution would be to use the current dhcpd but run it
> on top of either
> (1) a very reliable file server (venti?)
> or
I am not sure it is a good idea to layer an essential, boot-phase service
on top of something complex that itself relies on other services in the
architecture...
> (2) a replicated file service that runs on all the servers' machines.
>
Hmm... Do you mean something close to virtual shares over CIFS? Interesting,
but probably much more complex to implement.
> I don't really have (1) though venti might get there.
> I know how to do (2) but then I'd have to handle
> inconsistencies and the solution looks like it'll
> get so dhcp dependent that I should just go with the
> draft standard.
>
> Any comments? Someone already done it or something better?
> I'm going to start on the draft version but I have nothing
> against stopping or changing it later.
>