9fans - fans of the OS Plan 9 from Bell Labs
From: "Roman V. Shaposhnik" <rvs@sun.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] sendfd() on native Plan 9?
Date: Sat,  3 Jan 2009 13:23:02 -0800	[thread overview]
Message-ID: <1231017782.11463.193.camel@goose.sun.com> (raw)
In-Reply-To: <20090101235716.GC8355@masters10.cs.jhu.edu>

[ I took the liberty of merging two of your emails for ease of commenting ]

On Thu, 2009-01-01 at 18:57 -0500, Nathaniel W Filardo wrote:
> That is, the claim that "a process spawned without access to your home
> directory cannot get it" is flawed if that process runs as your user.
> (Even if I can't mount it, I can attach a debugger to a process that
> can and make it make system calls for me.  You now have to
> intermediate my #p (/proc) service.  You have to ensure that I can't
> dial it and authenticate with factotum.  It's a mess!)

It would seem to me that none of the issues you've mentioned here
are insurmountable. As far as #p and #s are concerned, the good news
under Plan 9 is that they are supposed to be 'just servers' (the reality
of the situation is a bit more grim -- but I'd like to discuss
that in a separate thread). As such, if the default implementation
seems too open, you can always harden its security semantics by
providing proxying user-level servers (well, that plus fixing all the
smart-ass apps -- more on that in a separate thread).

All in all, I don't see the above example as a fundamental obstacle to
what you're trying to accomplish.

> I may be tainted by the capability microkernel koolaid,
> but I don't like the idea that users are the sole security
> domain objects

What does your ideal world look like? What else, besides users and
groups of users, would you like to have as security domain objects?

> I really dislike that I can build stronger security constructs when
> using multiple kernels rather than just one.  Especially in a system
> like Plan 9, where namespaces are cheap and capabilities exist in the
> form of file descriptors.
[....]
> That is, I want, essentially, to run multiple Plan 9 systems on one
> Plan 9 kernel without needing to get my whole domain to know about
> multiple instances of my user

The above two seem to be the most crucial statements in this whole
thread. If I were to interpret them, I would say that what you're really
after is the ability to provide an OS-level virtualization solution
under Plan 9.

The very same reasons were part of the motivation for implementing
Solaris containers/zones for Solaris 10.

Now, if one looks past all the marketing hype and the amount
of thrust that was needed to make that particular pig fly, Solaris
Zones are all about convincing a group of processes that they run
under separate kernels on separate hosts, even though in reality they
could all very well share a common "big iron" server. Long story
short, it all boiled down to:
   1. Virtualizing root account, so that every Zone can have its own
   2. Virtualizing the global UNIX namespace
   3. Virtualizing TCP/IP stack
   4. Virtualizing drivers providing access to the service provided
      by the kernel itself (things like /proc, etc.)
   5. Implementing per-Zone quotas on CPU, Memory and I/O consumption

Now, under Plan 9, #1-3 come for free. I'm not sure how #5 is possible
(anybody?). And thus #4 is the only area that needs work. That work
is really all about teaching things like #s and #p to only disclose
information about processes running within the same "zone". Whether
"same zone" means "same process group" or an extra level of process
grouping is needed remains to be seen.
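To make #4 concrete, here is a minimal sketch (in Python, purely
illustrative -- the "zone" field and the sample process records are
made up) of the kind of filtering a proxying #p-style server would
apply before disclosing process information:

```python
# Sketch: a proxying /proc-like server discloses only the processes
# that belong to the caller's zone. The zone labels are hypothetical;
# whether a zone maps to a process group or something else is exactly
# the open question above.

def visible_procs(procs, caller_zone):
    """Return only the processes that share the caller's zone."""
    return [p for p in procs if p["zone"] == caller_zone]

procs = [
    {"pid": 1, "user": "glenda", "zone": "z1"},
    {"pid": 2, "user": "glenda", "zone": "z2"},
    {"pid": 3, "user": "none",   "zone": "z1"},
]

print([p["pid"] for p in visible_procs(procs, "z1")])  # → [1, 3]
```

The point being: the kernel drivers themselves need not change if a
user-level proxy sits between them and the jailed processes.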

> > > supposing that you have a fully jailed process.  if it has a connection
> > > to the fileserver, which does do security by user id, the jailed process
> > > can still mess with you.  say by deleting all your files.
>
> Yes, exactly.  I don't understand the question?

The question, as I understood it, was about the practicality of such
an attack. For every fully jailed process that belongs to the user Foo,
one of the following holds: either it has no existing channels to
external file servers and no means of creating them, or it can obtain
a channel (either an existing one or a new one). From the point where it
connects *and* passes the auth process, you are constrained by the
security model of 9P, and users and user groups are the only thing
that is there.

> > > i think the real question here is why don't you trust your
> > > processes?  is it because someone else is running them
> >
> > That was, essentially, my original question. Nathaniel, could you,
> > please answer it?
>
> I'm looking at a system like 9gridchan

What is 9gridchan?

> where an essentially autonomous agent publishes services.

How does that publishing happen? If the system is distributed,
I'd presume that the publishing happens in a network-friendly
manner, thus making the whole discussion around #s pretty much
moot.

> Further, 9gridchan pollutes the namespace of /srv for everybody
> else on the system, and its current naming scheme makes it impossible to run
> two 9gridchan agents (w/o modification) even as different users.

I'd have to know more about 9gridchan agents in order to reason about
this intelligently. So far it seems to me that your general issue
with the global nature of /srv can be somewhat mitigated by following
a particular naming convention for files under /srv. More on that
later.
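For what it's worth, here is the shape of convention I have in mind,
sketched in Python. The <user>.<service>.<pid> format is hypothetical,
just one way to keep two agents (even run by the same user) from
colliding under a global /srv:

```python
import getpass
import os

def srv_name(service):
    """Build a collision-resistant name for a file posted to /srv.

    Hypothetical convention: <user>.<service>.<pid>. Two agents for
    the same service, run by the same or different users, get
    distinct names because the pid differs.
    """
    return "%s.%s.%d" % (getpass.getuser(), service, os.getpid())
```

A service browser would then list /srv and group entries by the user
prefix, rather than relying on a single well-known name.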

> I also want, I think, an extension to 9gridchan where I can publish a
> service which relays for other 9gridchan nodes, which essentially means that
> remote machines are directing the contents of my /srv.  (I want this so that
> I can bounce around NATs or loss of direct connectivity in the 9gridchan
> mesh... other proposals for solving this problem welcome.)

Again, I have to know more about 9gridchan and how the publishing works.

> However, /srv is currently the only mechanism (AFAIK) for a fd in a pgrp A
> to be given to a process in pgrp B.  Therefore, if /srv is to be virtualized
> (made a property of namespaces) ala /tmp, then something like sendfd() seems
> to be necessary to replace this functionality when required.

Now, just to be clear: this requirement is quite different from
your earlier OS-level virtualization one, since you're trying to
introduce IPC between processes which were specifically told
to behave as though the only IPC available to them is networking.

That said, the requirement itself makes sense. Although, even with
sendfd() you have to have a socket connection established before you can
pass fd's around. Except for the case where such a connection
was created using socketpair(), sockets are *named* objects.
And that should present all the same problems that naming files
under /srv presents, shouldn't it?

IOW, I'd be very curious to see a proposal from you on how the
channel for this imaginary sendfd() is supposed to be managed so that
it gets rid of all the issues you've identified with /srv.
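To illustrate the one anonymous case mentioned above -- sketched here
with POSIX socketpair() and Python's fd-passing helpers, since Plan 9
itself has no sendfd() and this is how the Unix world does it -- a
descriptor crosses the channel with no global name involved anywhere:

```python
import os
import socket

# An anonymous, *unnamed* channel between two endpoints, the kind of
# thing a socketpair()-based sendfd() would use.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Something worth handing over: the read end of a pipe.
r, w = os.pipe()
os.write(w, b"hello from the other side")
os.close(w)

# Pass the descriptor itself across the socketpair (SCM_RIGHTS under
# the hood; socket.send_fds/recv_fds need Python 3.9+).
socket.send_fds(parent, [b"fd"], [r])
os.close(r)  # the kernel holds its own reference for the receiver

msg, fds, _, _ = socket.recv_fds(child, 16, 1)
data = os.read(fds[0], 64)
print(data)  # → b'hello from the other side'
```

Once the endpoints are named rather than created pairwise, though, you
are right back to the /srv naming problem.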

> Does that help answer the question?

Well, it definitely helped me understand what it is that you're after.

Thanks,
Roman.





Thread overview: 37+ messages
2008-12-23 18:01 Nathaniel W Filardo
2008-12-23 22:52 ` Rodolfo kix Garcia
2008-12-23 23:53   ` Francisco J Ballesteros
2008-12-24  1:10     ` Nathaniel W Filardo
2008-12-24  1:39       ` erik quanstrom
2008-12-24  3:00         ` Nathaniel W Filardo
2008-12-24  4:14           ` erik quanstrom
2008-12-24  7:36             ` Nathaniel W Filardo
2008-12-24 13:36               ` erik quanstrom
2008-12-27 20:27                 ` Roman Shaposhnik
2008-12-27 20:34                   ` Eric Van Hensbergen
2008-12-27 20:21       ` Roman Shaposhnik
2008-12-30  8:22         ` Nathaniel W Filardo
2008-12-30 15:04           ` Eric Van Hensbergen
2008-12-30 15:31           ` erik quanstrom
2009-01-01 22:53             ` Roman V. Shaposhnik
2009-01-01 23:57               ` Nathaniel W Filardo
2009-01-03 21:23                 ` Roman V. Shaposhnik [this message]
2009-01-03 21:41                   ` erik quanstrom
2009-01-03 21:59                     ` Roman V. Shaposhnik
2009-01-03 23:57                   ` Nathaniel W Filardo
2009-01-04  5:19                     ` lucio
2009-01-04  5:48                       ` erik quanstrom
2009-01-04  6:10                         ` Nathaniel W Filardo
2009-01-04  6:43                           ` lucio
2009-01-05  1:12                             ` Roman V. Shaposhnik
2009-01-05  1:32                               ` erik quanstrom
2009-01-05  3:48                                 ` lucio
2009-01-04 17:32                           ` erik quanstrom
2009-01-04 18:23                             ` lucio
2009-01-05  1:24                               ` Roman V. Shaposhnik
2009-01-04  5:58                       ` Nathaniel W Filardo
2009-01-04  6:26                         ` lucio
2009-01-04 15:46                           ` erik quanstrom
2009-01-05  4:30                     ` Roman V. Shaposhnik
2008-12-24  1:17   ` Nathaniel W Filardo
2008-12-27 17:06 ` Russ Cox
