The Unix Heritage Society mailing list
From: Dan Cross <crossd@gmail.com>
To: "Theodore Ts'o" <tytso@mit.edu>
Cc: TUHS main list <tuhs@minnie.tuhs.org>,
	Douglas McIlroy <douglas.mcilroy@dartmouth.edu>
Subject: Re: [TUHS] ATC/OSDI'21 joint keynote: It's Time for Operating Systems to Rediscover Hardware (Timothy Roscoe)
Date: Thu, 16 Sep 2021 15:27:17 -0400
Message-ID: <CAEoi9W6AeNhaNWRXgYuPzfWAfc8btatMp=Wd=mYOscEyQK7Rng@mail.gmail.com>
In-Reply-To: <YTIhbyOXDNuGb/Ov@mit.edu>

On Fri, Sep 3, 2021 at 9:23 AM Theodore Ts'o <tytso@mit.edu> wrote:

> On Thu, Sep 02, 2021 at 11:24:37PM -0400, Douglas McIlroy wrote:
> > I set out to write a reply, then found that Marshall had said it all,
> > better. Alas, the crucial central principle of Plan 9 got ignored, while
> > its ancillary contributions were absorbed into Linux, making Linux fatter
> > but still oriented to a bygone milieu.
>
> I'm really not convinced trying to build distributed computing into
> the OS ala Plan 9 is viable.


It seems like plan9 itself is an existence proof that this is possible.
What it did not present was an existence proof of its scalability, and it
wasn't commercially successful. It probably bears mentioning that that
wasn't really the point of plan9, though; it was a research system.

I'll try to address the plan9 specific bits below.

> The moment the OS has to span multiple
> TCB's (Trusted Computing Bases), you have to make some very
> opinionated decisions on a number of issues for which we do not have
> consensus after decades of trial and error:
>

Interestingly, plan9 did make opinionated decisions about all of the things
mentioned below. Largely, those decisions worked out pretty well.

>    * What kind of directory service do you use?  X.500/LDAP?   Yellow Pages?
>        Project Athena's Hesiod?
>

In the plan9 world, the directory services were provided by the filesystem
and a user-level library that consumed databases of information resident in
the filesystem itself (ndb(6) -- https://9p.io/magic/man2html/6/ndb). It
also provided DNS services for interacting with the larger world. The
connection server was an idea that was ahead of its time (ndb(8) --
https://9p.io/magic/man2html/8/ndb). Also,
https://9p.io/sys/doc/net/net.html.
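
As a concrete sketch (written from memory, with a made-up host name), a
client dials a service symbolically and lets cs(8) and ndb work out what
that means on the wire:

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        int fd;

        /* cs(8) consults ndb/dns to resolve the symbolic dial string;
         * the host name below is invented for the example. */
        fd = dial(netmkaddr("fs.example.com", "tcp", "9fs"), nil, nil, nil);
        if(fd < 0)
            sysfatal("dial: %r");
        /* fd is now a byte stream to the remote 9P service */
        exits(nil);
    }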

>    * What kind of distributed authentication do you use?  Kerberos?
>        Trust on first use authentication ala ssh?  .rhosts style
>        "trust the network" style authentication?
>

Plan 9 specifically used a Kerberos-like model, but did not use the
Kerberos protocol. I say "Kerberos-like" in that there was a trusted agent
on the network that provided authentication services using a protocol based
on shared secrets.

>    * What kind of distributed authorization service do you use?   Unix-style
>        numeric user-id/group-id's?   X.500 Distinguished Names in ACL's?
>        Windows-style Security ID's?
>

User and group names were simple strings. There were no numeric UIDs
associated with processes, though the original file server had a
user<->integer mapping for directory entries.
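
You can see that directly in the stat(5)/dirread interface: paraphrasing
from memory of /sys/include/libc.h, the Dir structure carries names, not
numbers:

    typedef struct Dir {
        ushort  type;    /* server type */
        uint    dev;     /* server subtype */
        Qid     qid;     /* unique id from server */
        ulong   mode;    /* permissions */
        ulong   atime;   /* last read time */
        ulong   mtime;   /* last write time */
        vlong   length;  /* file length */
        char    *name;   /* last element of path */
        char    *uid;    /* owner name -- a string, not a uid_t */
        char    *gid;    /* group name */
        char    *muid;   /* name of last modifier */
    } Dir;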

>    * Do you assume that all of the machines in your distributed
>        computation system belong to the same administrative domain?
>

Not necessarily, no. The model is one of resource sharing, rather than
remote access. You usually pull resources into your local namespace, and
those can come from anywhere you have access to take them from. They may
come from a different "administrative domain". For example, towards the end
of the BTL reign, there was a file server at Bell Labs that some folks had
accounts on and that one could "import" into one's namespace. That was the
main distribution point for the larger community.
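
Mechanically that was just a command or two; something like the following,
where the host name is invented because I don't remember the real one:

    % import fs.example.com / /n/outside
    % ls /n/outside/plan9

After the import, the remote tree is simply part of your namespace, subject
to whatever permissions you have on the far end.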

>        What if individuals owning their own workstations want to have
>        system administrator privs on their system?


When a user logs into a Plan 9 terminal, they become the "host owner" of
that terminal. The host owner is distinguished only in owning the hardware
resources of the hosting machine; they have no other special privileges,
nor is there an administrative user like `root`. It's slightly unusual,
though not unheard of, for a terminal to have a local disk; the disk device
is implicitly owned by the host owner. If the user puts a filesystem on
that device (say, to cache a dataset locally or something), that's on them,
though host owner status doesn't really give any special privileges over
the permissions on the files on that filesystem, modulo going through
the raw disk device, of course. That is, the Unix notion that uid==0 bypasses
all access permission checking is gone in Plan 9. CPU servers have a mechanism
where a remote user can start a process on the server that becomes owned by
the calling user; this is similar to remote login, except that the user
brings their environment with them; the model is more of importing the CPU
server's computational resources into the local environment than, say,
ssh'ing into a machine.

> Or is your
>        distributed OS a niche system which only works when you have
>        clusters of machines that are all centrally and
>        administratively owned?
>

I'm not sure how to parse that; I mean, arguably networks of Unix machines
associated with any given organization are "centrally owned and
administered"? To get access to any given plan9 network, someone would have
to create an account on the file and auth servers, but the system was also
installable on a standalone machine with a local filesystem. If folks
wanted to connect in from other types of systems, there were mechanisms for
doing so: `ssh` and `telnet` servers were distributed and could be used,
though the experience for an interactive user was pretty anemic. It was
more typical to use a program called `drawterm` that runs as an application
on e.g. a Mac or Unix machine and emulates enough of a Plan 9 terminal
kernel that a user can effectively `cpu` to a plan9 CPU server. Once logged
in via drawterm, one can run an environment including a window system and
all the graphical stuff from there.
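
Concretely, from a Mac or Unix box, that looks something like

    drawterm -a auth.example.com -c cpu.example.com -u dan

(the host names here are placeholders), after which you can start the
window system and work much as you would at a real terminal.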

Perhaps the aforementioned Bell Labs file server example clarifies things a
bit?

>    * What scale should the distributed system work at?  10's of machines
>        in a cluster?   100's of machines?  1000's of machines?
>        Tens of thousands of machines?


This is, I think, where one gets to the crux of the problem. Plan 9 worked
_really_ well for small clusters of machines (10s) and well enough for
larger clusters (up to 100s or 1000s).

> Distributed systems that work
>        well on football-field-sized data centers may not work that well
>        when you only have a few racks in a colo facility.  The "I forgot
>        how to count that low" challenge is a real one...


And how. Plan 9 _was_ eventually ported to football-field-sized machines
(the BlueGene port for DoE was on that scale); Ron may be able to speak to
that in more detail, if he is so inclined. In fairness, I do think that
required significant effort and it was, of course, highly specialized to
HPC applications.

My subjective impression was that any given plan9 network would break down
at the scale of single-digit thousands of machines and, perhaps, tens of
thousands of users. Growing beyond that for general use would probably
require some pretty fundamental changes; for example, 9P (the file
protocol) includes a client-chosen "FID" in transactions related to any
open file, so that file servers must keep track of client state to
associate fids with actual files, whether those files refer to disk-resident
stable storage or software-synthesized "files" for other things (IPC
endpoints; process memory; whatever).
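
To make the statefulness concrete, a typical 9P conversation runs roughly
like this (paraphrased in 9P2000 terms; the fids and tags are small integers
the client chooses, and the server must remember what every live fid
refers to):

    Tattach tag=1 fid=0 afid=NOFID uname=dan aname=''  ->  Rattach qid=...
    Twalk   tag=2 fid=0 newfid=1 wname='usr' 'dan'     ->  Rwalk  qid=...
    Topen   tag=3 fid=1 mode=OREAD                     ->  Ropen  qid=...
    Tread   tag=4 fid=1 offset=0 count=8192            ->  Rread  data...
    Tclunk  tag=5 fid=1                                ->  Rclunk

That per-client fid table is cheap for a workgroup-sized file server and
much less so at the larger scales in question.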

> There have been many, many proposals in the distributed computing
> arena which all try to answer these questions differently.  Solaris
> had an answer with Yellow Pages, NFS, etc.  OSF/DCE had an answer
> involving Kerberos, DCE/RPC, DCE/DFS, etc.  More recently we have
> Docker's Swarm and Kubernetes, etc.  None have achieved dominance, and
> that should tell us something.
>
> The advantage of trying push all of these questions into the OS is
> that you can try to provide the illusion that there is no difference
> between local and remote resources.


Is that the case, or is it that system designers try to provide uniform
access to different classes of resources? Unix treats socket descriptors
very similarly to file descriptors, which in turn behave a lot like pipes;
why shouldn't named resources be handled in similar ways?
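
Plan 9 took that view to its logical end: the network stack itself is a
file tree, so even "making a TCP connection" is just opening and writing
files. A rough sketch from memory (the address is a made-up example):

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char dir[40], data[64];
        int cfd, dfd, n;

        cfd = open("/net/tcp/clone", ORDWR);    /* allocate a connection */
        if(cfd < 0)
            sysfatal("clone: %r");
        n = read(cfd, dir, sizeof dir - 1);     /* its number, e.g. "4" */
        if(n <= 0)
            sysfatal("read clone: %r");
        dir[n] = 0;
        if(fprint(cfd, "connect 192.0.2.10!564") < 0)   /* invented address */
            sysfatal("connect: %r");
        snprint(data, sizeof data, "/net/tcp/%s/data", dir);
        dfd = open(data, ORDWR);                /* the byte stream itself */
        if(dfd < 0)
            sysfatal("open data: %r");
        exits(nil);
    }

dial(2) wraps that dance up for you, but nothing about it is privileged:
it's the same open/read/write any program can do, against a local /net or
one imported from some other machine.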

> But that either means that you
> have a toy (sorry, "research") system which ignores all of the ways in
> which remote computation extends to a different node that may or
> may not be up, which may or may not belong to a different
> administration domain, which may or may not have an adversary on the
> network between you and the remote node, etc.  OR, you have to make
> access to local resources just as painful as access to remote
> resources.  Furthermore, since supporting access to remote resources is
> going to have more overhead, the illusion that access to local and
> remote resources can be the same can't be comfortably sustained in any
> case.
>

...or some other way, which we'll never know about because no one thinks to
ask the question, "how could we do this differently?" I think that's the
crux of Mothy's point.

Plan 9, as just one example, asked a lot of questions about the issues you
mentioned above, 30 years ago. They came up with _a_ set of answers; that
set did evolve over time as things progressed. That doesn't mean that those
questions were resolved definitively, just that there was a group of
researchers who came up with an approach to them that worked for that group.

What's changed is that we now take for granted that Linux is there, and
we've stopped asking questions about anything outside of that model.

> When you add to that the complexities of building an OS that tries to
> do a really good job supporting local resources --- see all of the
> observations in Rob Pike's "Systems Software Research is Irrelevant" slides
> about why this is hard --- it seems to me the solution of trying to
> build a hard dividing line between the Local OS and Distributed
> Computation infrastructure is the right one.
>
> There is a huge difference between creating a local OS that can live
> on a single developer's machine in their house --- and a distributed
> OS which requires setting up a directory server, and an authentication
> server, and a secure distributed time server, etc., before you set up
> the first useful node that can actually run user workloads.  You can
> try to do both under a single source tree, but it's going to result in
> a huge amount of bloat, and a huge amount of maintenance burden to
> keep it all working.


I agree with the first part of this paragraph, but then we're talking about
researchers, not necessarily unaffiliated open-source developers. Hopefully
researchers have some organizational and infrastructure support!

> By keeping the local node OS and the distributed computation system
> separate, it can help control complexity, and that's a big part of
> computer science, isn't it?
>

I don't know. It seems like this whole idea of distributed systems built on
networks of loosely coupled, miniature timesharing systems has led to
enormous amounts of inescapable complexity.

I'll bet Kubernetes by itself is larger than all of plan9.

        - Dan C.
