The Unix Heritage Society mailing list
From: Paul Ruizendaal <>
To: The Eunuchs Hysterical Society <>
Subject: [TUHS] Re: PCS Munix kernel source
Date: Sat, 13 Aug 2022 22:40:26 +0200
Message-ID: <>
In-Reply-To: <>

> On Aug 11, 2022, at 11:01 AM, Holger Veit <> wrote:
>> My understanding so far (from reading the paper a few years ago) is that Newcastle Connection works at the level of libc, substituting system calls like open() and exec() with library routines that scan the path, and if it is a network path invokes user mode routines that use remote procedure calls to give the illusion of a networked kernel. I’ve briefly looked at the Git repo, but I do not see that structure in the code. Could you elaborate a bit more on how Newcastle Connection operates in this kernel? Happy to communicate off-list if it goes in too much detail.
> Maybe the original NC did so, but here there are numerous additions to
> the kernel, including a new syscall uipacket() which is the gateway into
> the MUNET/NC implementation. Stuff is in /usr/sys/munet; the low-level
> networking is in uipacket.c and uiswtch.c, which process RPC
> open/close/read/write/exec etc. calls, as well as supporting master and
> diskless nodes. The OS code is full of "if (master) {...} else {...}"
> code which then redirects detected access to remote paths to the network
> handler.

I came across a later paper for Unix United / Newcastle Connection. It seems to consider moving parts of the code into the kernel, to combat executable bloat (the relevant libc code would be copied into every executable, prior to shared libraries):
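For concreteness, the libc-interposition idea can be sketched roughly as below. This is my illustration, not NC source: the "/.." superroot prefix is the naming convention the paper describes, and remote_open() here is just a stub standing in for the whole user-mode RPC layer.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch only -- not Newcastle Connection source.  NC named remote
 * machines under a "superroot", e.g. /../unix2/usr/foo. */
static int is_remote(const char *path) {
    return strncmp(path, "/../", 4) == 0;
}

/* Stub standing in for the user-mode RPC machinery that would
 * forward the request to a server process on the remote machine. */
static int remote_open(const char *path, int flags) {
    fprintf(stderr, "would RPC open(%s) to remote host\n", path);
    (void)flags;
    return -1;   /* sketch only */
}

/* Library replacement for open(2): scan the path, divert network
 * paths to user-mode RPC, fall through to the real system call
 * otherwise.  This replacement code was linked into every
 * executable, hence the bloat concern above. */
static int nc_open(const char *path, int flags) {
    if (is_remote(path))
        return remote_open(path, flags);
    return open(path, flags);
}
```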

>> Re-reading the Newcastle Connection paper also brought up some citations from Bell Labs work that seems to have been lost. There is a reference to “RIDE” which appears to be a system similar to Newcastle Connection. The RIDE paper is from 1979 and it mentions that RIDE is a Datakit re-implementation of an earlier system that ran on Spider. Any recollections about these things among the TUHS readership?
>> The other citation is for J. C. Kaufeld and D. L. Russell, "Distributed UNIX System", in Workshop on Fundamental Issues in Distributed Computing, ACM SIGOPS and SIGPLAN (15-17 Dec. 1980). It seems contemporaneous with the Luderer/Marshall/Chu work on S/F-Unix. I could not find this paper so far. Here, too, any recollections about this distributed Unix among the TUHS readership?

> Good to mention this. I am interested in such stuff as well.

I found a summary of that ACM SIGOPS workshop. There is a half-page summary of David Russell's presentation on “Distributed Unix”:


2.4 David Russell: Distributed UNIX

Distributed UNIX is an experimental system consisting of a collection of machines running a modified version of UNIX.

Communication among processors is performed by a packet switching network built on Datakit hardware. The network has
a capacity of roughly 7.5 megabits per second, shared among several channels. Addressing is done by channel, not by target
processor. Messages are received in the order sent. To the user, communication takes the form of a virtual circuit,
a logical full-duplex connection between processors.

A virtual circuit can be modeled as a box with four ends: DIN and DOUT control data transmission, and CIN and COUT
control transmission of control information. Circuits can be spliced together, or attached to devices. Circuits can
also be joined into groups. A virtual circuit is set up in the following way: a socket is created with a globally
unique name, and processes then request connections to the named socket. Routing information is implicit. Virtual
circuits support location transparency, since sockets can move, but not replication transparency. If a circuit breaks,
it is set up again, although it is up to the user to handle recovery of state information. Machine failure will destroy
a virtual circuit.

A transparent distributed file system was set up. When a remote file is accessed, a socket name is generated, which is
used to establish a connection with a daemon at the file server processor. The daemon carries out the file access on
behalf of the remote user. In conclusion, offloading of tasks was found to work well. The path manager maintains very
little state. The splice interface between virtual circuits was found to be very efficient, although UNIX scheduling
was not appropriate for fast setup of circuits.
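The remote-file scheme in that last paragraph might look something like the following. This is entirely my guess at the shape from the summary; the record layout and all names are invented.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Invented request record: the daemon at the file server processor
 * carries out the access on the remote user's behalf. */
struct file_req {
    char op;          /* 'o' open, 'r' read, 'w' write, 'c' close */
    int  handle;      /* server-side handle for r/w/c             */
    char path[128];   /* path as the server machine sees it       */
};

/* Generate a socket name for a remote file access; per the summary
 * the name is globally unique and routing information is implicit
 * in it, so the client needs no routing table of its own. */
static void socket_name(char *out, size_t len,
                        const char *machine, const char *path) {
    snprintf(out, len, "fs.%s:%s", machine, path);
}
```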


The modeling of virtual circuits sounds like a mid-point between Chesson’s multiplexed files and Ritchie’s streams.
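For concreteness, here is a loopback sketch of that four-ended box, with splice reduced to a one-shot user-level forward. This is my reconstruction from the summary: the real mechanism lived in the kernel, and everything beyond the DIN/DOUT/CIN/COUT names is invented.

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Reconstruction of the four-ended box: DIN/DOUT carry data,
 * CIN/COUT carry control information.  Representation invented. */
struct vcircuit {
    int din, dout;   /* data: read end / write end    */
    int cin, cout;   /* control: read end / write end */
};

/* Build a loopback circuit out of two pipes. */
static int vc_make(struct vcircuit *vc) {
    int d[2], c[2];
    if (pipe(d) < 0 || pipe(c) < 0)
        return -1;
    vc->din = d[0]; vc->dout = d[1];
    vc->cin = c[0]; vc->cout = c[1];
    return 0;
}

/* One-shot "splice": forward pending data from circuit a to
 * circuit b.  The real splice joined circuits permanently inside
 * the kernel, with no user-level copying. */
static ssize_t vc_splice_once(struct vcircuit *a, struct vcircuit *b) {
    char buf[512];
    ssize_t n = read(a->din, buf, sizeof buf);
    if (n > 0 && write(b->dout, buf, (size_t)n) != n)
        return -1;
    return n;
}
```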

The file system actually sounds like it could be the RIDE system. Maybe this Distributed Unix and RIDE are one and the same thing (although the original Newcastle Connection paper suggests they are not).


