On 7/14/22 04:08, Lars Brinkhoff wrote:
> Hello,
>
> Unix V8 has some code for Chaosnet support. There is a small hobbyist
> Chaos network going with Lispm, PDP-10, PDP-11, and Berkeley Unix nodes.
> Is there anyone who would be interested in trying to see if the V8 code
> is in a workable state, and get it running?
>
> Best regards,
> Lars Brinkhoff
A network of lisp machines and PDP-10/11 systems seems pretty cool... is
there a web site for the network?
john
John Floren wrote:
> A network of lisp machines and PDP-10/11 systems seems pretty
> cool... is there a web site for the network?

Yes, it's here:
https://chaosnet.net/

And there's a GitHub organization:
https://github.com/Chaosnet/

To keep this on topic, known Unix implementations of Chaosnet include
4.1BSD (up and running), V8 (available but not running), and V7 (not
found yet).
Given V8 being rebased on 4(.1?)BSD, I suspect the path of least resistance would be to just start grafting V8 code onto the working 4.1BSD.
What sort of help are you looking for? I've got idle fingers in the evenings lately, if you just need some code junkies to work on things I'm happy to throw my hat in the ring.
- Matt G.
------- Original Message -------
On Thursday, July 14th, 2022 at 10:00 AM, Lars Brinkhoff <lars@nocrew.org> wrote:
> John Floren wrote:
>
> > A network of lisp machines and PDP-10/11 systems seems pretty
> > cool... is there a web site for the network?
>
>
> Yes, it's here:
> https://chaosnet.net/
>
> And there's a GitHub organization:
> https://github.com/Chaosnet/
>
> To keep this on topic, known Unix implementations of Chaosnet include
> 4.1BSD (up and running), V8 (available but not running), and V7 (not
> found yet).
Note, I don’t know what you’re planning, but Chaos couldn’t take any propagation delay. It’s really limited to a LAN implementation as originally designed.
segaloco wrote:
> What sort of help are you looking for? I've got idle fingers in the
> evenings lately, if you just need some code junkies to work on things
> I'm happy to throw my hat in the ring.

I'm mainly curious if the V8 code works. I haven't examined it at all,
so I have no idea.

For reference, I have a disk image with 4.1BSD patched and ready to
run with SIMH here:
http://lars.nocrew.org/tmp/Chaotic-4.1BSD.tar.bz2

I collected all the bits and pieces here and intended to make an
expect script to install, patch, and build everything. I didn't write
the script, but all the stuff should be here:
https://github.com/Chaosnet/Chaosnet-for-4.1BSD
On 7/14/22 2:19 PM, Ron Natalie wrote:
> Note, I don’t know what you’re planning, but Chaos couldn’t take any
> propagation delay. It’s really limited to a LAN implementation as
> originally designed.
>
It definitely had subnet routing, and as I recall, the KL10's and other
machines with front end I/O processors generally used chaosnet routing
between the host itself and the rest of the network. i.e. the I/O
processor was on one subnet, the host on a second subnet and the rest of
the "LAN" was on the other side of the I/O processor. My recollection is
that unlike an IP router, a Chaosnet node had only one address, and
routing tables determined which device to send the data on.
And LCS definitely had multiple coax cable runs with each run a subnet
with routing between. But with a maximum of 256 subnets, routing was
much simpler.
I wonder how much benefit is available from using network switches
rather than collision detection and retransmit, though the virtual token
was supposed to reduce collisions somewhat.
It can do subnets, but it can’t deal with long haul (over the greater
internet) time delays.
------ Original Message ------
From "Tom Teixeira" <tjteixeira@earthlink.net>
To tuhs@tuhs.org
Date 7/14/2022 4:32:55 PM
Subject [TUHS] Re: Unix V8 Chaosnet, any takers?
>On 7/14/22 2:19 PM, Ron Natalie wrote:
>>Note, I don’t know what you’re planning, but Chaos couldn’t take any propagation delay. It’s really limited to a LAN implementation as originally designed.
>>
>It definitely had subnet routing, and as I recall, the KL10's and other machines with front end I/O processors generally used chaosnet routing between the host itself and the rest of the network. i.e. the I/O processor was on one subnet, the host on a second subnet and the rest of the "LAN" was on the other side of the I/O processor. My recollection is that unlike an IP router, a Chaosnet node had only one address, and routing tables determined which device to send the data on.
>
>And LCS definitely had multiple coax cable runs with each run a subnet with routing between. But with a maximum of 256 subnets, routing was much simpler.
>
>I wonder how much benefit is available from using network switches rather than collision detection and retransmit, though the virtual token was supposed to reduce collisions somewhat.
>
This looks like it might be exactly what you're looking for:
http://9legacy.org/9legacy/doc/simh/v8

Steps to start with a 4.1BSD base in simh and use that to ultimately
produce a V8 system. I haven't audited these instructions myself, so
YMMV, but I suspect someone wouldn't go through the hard work if this
didn't do anything.

That said, your original posting mentions the PDP-11, but also
"Berkeley Unix Nodes". For the latter, do you mean VAX? These
instructions are for VAX too. I don't know whether V8 ran on the
PDP-11 or not, but if that's your intent, you may want to start with a
2.xBSD or V7 base instead.

- Matt G.

------- Original Message -------
On Thursday, July 14th, 2022 at 12:36 PM, Lars Brinkhoff <lars@nocrew.org> wrote:

> segaloco wrote:
>
> > What sort of help are you looking for? I've got idle fingers in the
> > evenings lately, if you just need some code junkies to work on things
> > I'm happy to throw my hat in the ring.
>
> I'm mainly curious if the V8 code works. I haven't examined it at all,
> so I have no idea.
>
> For reference, I have a disk image with 4.1BSD patched and ready to
> run with SIMH here:
> http://lars.nocrew.org/tmp/Chaotic-4.1BSD.tar.bz2
>
> I collected all the bits and pieces here and intended to make
> an expect script to install, patch, and build everything. I didn't
> write the script but all the stuff should be here:
> https://github.com/Chaosnet/Chaosnet-for-4.1BSD
On 7/14/22 2:39 PM, Ron Natalie wrote:
> It can do subnets, but it can’t deal with long haul (over the greater
> internet) time delays.

I wonder if there is an opportunity for something that pretends to be
the remote side locally, sends the data via some other
non-latency-sensitive protocol to a counterpart where the counterpart
pretends to be the near side.

     Local
    /
   /
 [A]----[B]==/==[C]---[D]
                   /
                  /
             Remote

Where ---- is Chaosnet over a short distance and ==/== is something
else over a long distance. B would pretend to be D so that A could
talk to D' in a timely manner, and conversely C would pretend to be A
so that A' could talk to D in a timely manner.

I've seen such spoofing / emulation in other protocols. Maybe the
prior art can work for Chaosnet.

P.S. Hopefully my ASCII art will survive the trip.

-- 
Grant. . . .
unix || die
Grant,

You've just described a useful piece of old ARPANET hardware: the
Advanced Communications Corp (ACC) Error Control Unit (ECU). The
ARPANET IMP 1822 (serial) interface could be "local host" (30 feet,
unbalanced serial) or "distant host" (up to 2,000 feet, balanced
serial). One used a pair of ACC ECUs - one at the IMP end, one at the
host end, with potentially arbitrary distance in between the ECUs - so
as to obviate the 1822 LH/DH limits.

I managed such a setup at LLNL in 1986 (MILNET, IMP #21): when I was
hired, the group I hired into (well, was put under contract to by CDC
Professional Services) was on-site at the lab, but the wire run
between the old AEC instrument trailer that was our machine room for a
VAX-11/780 and PDP-11/70 (both running BSD, natch) and the Magnetic
Fusion Energy Computer Center (MFE CC) where the IMP was located was
rather longer than 1822 DH could handle. A 3002 circuit ("dry" pairs)
and a pair of LADDS high-speed modems did the trick there.

Later, my group moved to the Hacienda Business Park in Pleasanton,
some miles away; we set up a Pac*Bell 56Kb/s (DS0) leased line with
standard CSU/DSUs to connect the ACC ECUs, and in turn the host (well,
router) to our port on the IMP.

I had some trouble getting that one going again because the ACC ECU
manuals were ... disjoint: simple recipes for setting DIP switches
(with no explanation of why), and a complete schematic in the back,
which was useless to me because I'm not an EE - and the documented
switch settings for our desired setup didn't work. ACC sent two
engineers to our site from Santa Barbara to solve the problem - the
senior one was the last engineer to issue an Engineering Change Order
(ECO) on the ECUs.
To bring this back to a Unix context, that sort of "spoofers in the
middle" was also the shtick of Telebit Trailblazer modems for the UUCP
"g" protocol in UUCP/USENET days: 19.2Kb/s in one direction at a time.
The modems "knew" the "g" protocol and spoofed it for maximum speed in
one direction, which was the way UUCP worked too - file transfers were
handled one direction at a time, with just ACKs coming back.
Internally, they effectively provided an optimized UUCP tunnel atop
their quite tenacious Packetized Ensemble Protocol (PEP).

Erik
> Message: 6
> Date: Thu, 14 Jul 2022 17:51:39 +0000
> From: segaloco <segaloco@protonmail.com>
>
> Given V8 being rebased on 4(.1?)BSD, I suspect the path of least resistance would be to just start grafting V8 code onto the working 4.1BSD.
I doubt that V8 is "rebased on 4(.1?)BSD": in my understanding it ported some code from 4xBSD, but it is a different code base.
As I currently understand it, the V8 kernel:
- is a further development from 32V
- retains the code organisation of the V5..32V versions
- adds virtual memory code from BSD
- adds select() from BSD
and then adds all the V8 innovation on top of that (streams, file system switch, virtual proc file system, networking, remote file system, support for the Blit terminal, etc.)
In particular in the area of networking the V8 kernel is organised quite differently from the 4xBSD kernel, including the Chaosnet driver (i.e. it is streams based).
Paul
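Paul's point that the V8 networking code (including its Chaosnet
driver) is streams based rather than socket based can be made concrete
with a toy sketch. Everything below is invented for illustration - the
class names, the push operation, the framing bytes - and only shows
the general shape of stacking modules on a bidirectional stream, not
actual V8 code:

```python
# Toy illustration of the streams idea: protocol processing is a stack
# of modules pushed between a user process and a device driver, and a
# write passes down through each module in turn. Purely conceptual.

class Module:
    """One stream module: transforms messages going down (write side)."""
    def __init__(self, name, encode):
        self.name = name
        self.encode = encode

class Stream:
    """A stack of modules between a user process and a device driver."""
    def __init__(self):
        self.modules = []

    def push(self, module):
        # Like pushing a line discipline/module: newest module sits on top.
        self.modules.insert(0, module)

    def write(self, msg):
        # A write passes down through every pushed module to the driver.
        for m in self.modules:
            msg = m.encode(msg)
        return msg  # what the device driver would finally emit

s = Stream()
s.push(Module("chaos-demo", lambda m: b"\x00\x80" + m))  # fake framing
s.push(Module("reverse-demo", lambda m: m[::-1]))        # any transform stacks
print(s.write(b"hello"))  # -> b'\x00\x80olleh'
```

The point of the exercise: a Chaosnet driver slots into such a stack
like any other module, which is structurally quite different from the
4xBSD socket/protocol-switch organization.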
> From: Lars Brinkhoff

> There is a small hobbyist Chaos network going

What encapsulation are they using to transmit CHAOS packets (over the
Internet, I assume)? I know there was an IP protocol number assigned
for CHAOS (16.), but I don't recall if there was ever a spec? (Which
is kind of amusing, since in 'Assigned Numbers', the person
responsible for 16. is .... me! :-)

There was a spec for encapsulating IP in CHAOS, and that actually
_was_ implemented at MIT BITD; it was used for a while to get IP
traffic to a Unix machine (V7, IIRC) over on main campus, at a stage
when only CHAOS hardware (very confusing that the same name was
applied to hardware, and a protocol suite) ran across to main campus
from Tech Square.

> From: Grant Taylor

> I wonder if there is an opportunity for something that pretends to be
> the remote side locally, sends the data via some other
> non-latency-sensitive protocol to a counterpart where the counterpart
> pretends to be the near side.

Let's think through the details.

The near-end 'invisibility box' (let's call it) is going to have to
send acks to the original source machine, otherwise that will time
out, abort the connection, etc. The originating machine is its own
thing, and this behaviour can't be controlled/stopped. (This, BTW,
shows a key difference between 'local' and 'across network' modes, a
recent topic here; in a situation where two distinct machines are
cooperating across a network, the other machine is its own entity, and
can't be expected/guaranteed to do anything in particular at all.)

In addition, the near-end invisibility box is going to have to keep a
copy of each packet, until the destination machine sends an ack to the
invisibility box - because if the packet is lost, the invisibility box
is going to have to retransmit it. (The original source is not going
to - it's already gotten an ack, so as far as it's concerned, that
packet is done and dusted.)

And the near-end invisibility box is also going to have to have a
time-out timer, so that when the ack isn't seen, it will know to
retransmit the packet. There's no point in _also_ sending the acks on
to the originating machine; they won't do anything useful, and might
just confuse it.

So, let's see now: the near-end invisibility box buffers the packet,
looks for an ack, times out when it doesn't see it, re-transmits the
packet - that sounds familiar? Oh, right, it's a reliable connection.

And presumably there's an invisibility box at the far end too, so the
same can happen for any data packets the destination machine sends.
The only question is whether, when doing the detailed design, there's
any point to having the destination machine's acks sent to the
near-end invisibility box - or just use them at the far-end
invisibility box. (I.e. create three reliable connections: i) a CHAOS
connection, originating machine<->near-end invisibility box; ii)
{whatever}, near-end invisibility box<->far-end invisibility box;
iii) CHAOS, far-end invisibility box<->original destination machine -
and glue them together.)

That is in some sense simpler than creating an extra-ordinary
mechanism to have a 'helper' box in the middle of the connection (to
terminate the CHAOS connection from the original machine, and have
another CHAOS connection, but with an enhanced time-out mechanism
which can cope with larger RTTs, from there to the original
destination); and the same in the other direction.

The amusing thing is that the CHAOS protocol stack actually already
had something very similar to this, BITD. If one were sitting at a
machine that only had CHAOS, and one wanted to (say) TELNET to an
ARPANET host, there was a CHAOS service which ARPANET hosts on the
CHAOSNET provided: open a CHAOS connection to that server, and it
would open an NCP connection to one's intended ultimate destination,
and glue the byte streams together. (The ultimate destination was
passed as data over the CHAOS connection, when opening it.)

    Noel
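The buffering/retransmit behaviour Noel describes for the near-end box
can be sketched in a few lines. This is a toy, deterministic
simulation under invented names (LongHaulLink, InvisibilityBox) - not
a Chaosnet implementation - showing that acking the local sender
immediately while reliably retransmitting over the lossy long haul
yields exactly a reliable connection in the middle:

```python
# Toy "near-end invisibility box": acks the local sender at once (so the
# latency-sensitive protocol never times out), buffers the packet, and
# retransmits over the lossy long-haul link until the far end acks it.
# All names here are hypothetical, for illustration only.

class LongHaulLink:
    """A long-haul link that drops packets per a fixed, repeating pattern."""
    def __init__(self, drop_pattern):
        self.drop_pattern = drop_pattern  # True = this transmission is lost
        self.sends = 0
        self.delivered = []

    def send(self, pkt):
        dropped = self.drop_pattern[self.sends % len(self.drop_pattern)]
        self.sends += 1
        if not dropped:
            self.delivered.append(pkt)
            return True   # far end's ack came back
        return False      # timeout: no ack seen

class InvisibilityBox:
    """Ack locally right away, then reliably relay over the long haul."""
    def __init__(self, link):
        self.link = link
        self.local_acks = 0

    def accept(self, pkt):
        self.local_acks += 1              # immediate local ack: source moves on
        while not self.link.send(pkt):    # buffer + retransmit on each timeout
            pass                          # (a real box would also back off)

link = LongHaulLink(drop_pattern=[True, True, False])  # 2 of every 3 lost
box = InvisibilityBox(link)
for pkt in ["OPN", "DAT 1", "DAT 2"]:
    box.accept(pkt)

print(link.delivered)   # -> ['OPN', 'DAT 1', 'DAT 2']: all arrive despite loss
print(link.sends)       # -> 9: three transmissions per packet were needed
```

The `while not send(...)` loop is the whole point: buffer, time out,
retransmit - i.e. the box is itself one end of a reliable connection,
just as Noel concludes.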
On Fri, Jul 15, 2022 at 06:25:37AM -0400, Noel Chiappa wrote:
>
> There was a spec for encapsulating IP in CHAOS, and that actually _was_
> implemented at MIT BITD; it was used for a while to get IP traffic to a Unix
> machine (V7, IIRC) over on main campus, at a stage when only CHAOS hardware
> (very confusing that the same name was applied to hardware, and a protocol
> suite) ran across to main campus from Tech Square.
As I recall it was possible to access Chaosnet from Project Athena
VAX/750's running BSD 4.3, so presumably Chaosnet was ported into the
BSD 4.3 kernel at one point. It wasn't used for anything official,
but there were folks who were using it to access Tech Square machines
from the SIPB office in 11-205 (on the main campus) as late as 1991 or
so.
- Ted
I had an ECU connecting the BRL-GATEWAY to the IMP in another
building. Rutgers also used an ECU to get their Internet connection
off the Columbia IMP.

When visiting BBN’s NOC I noted little red squares on the network map
for my site. I was told these signified an ECU, and it is because it
causes problems when the host crashes. Let me explain this quaint
piece of operation.

Both the IMP and the LH or DH host interface have a relay that
indicates the interface is powered up. The idea is that if you shut
down the machine (or the IMP), the relay fails open. That’s fine for a
gross shutdown, but let’s say the machine just hangs. The IMP has a
feature called “TARDY DOWN.” If you stop accepting packets for a
while, the thing declares you “TARDY DOWN” (effectively down for the
rest of the net). This is signified on the later C-30 IMPs by the
display register bit for that host flashing.

Now the IMP says, “Hey, let me try to wake this guy up.” It does this
by dropping its relay for a second, hoping to trick the host into
thinking it has rebooted and it should reinitialize. Of course, this
gambit rarely works, but if the host did wake up and it starts
processing messages again, the IMP says “All is well” and declares the
host up.

Now, what if you had an ECU? Well, when the IMP flaps its relay, the
ECU says, “Hey, something happened to the IMP. I better reset
everything, including any buffered packets.” The ECU actually buffers
three packets or so while it is transferring them to the other end.
Well, now that the buffer is empty, it can take three more packets.
The IMP declares it up. Of course, the host isn’t really there, so the
packets just stay there; then the ECU fills up and stops answering the
IMP, and the IMP eventually declares it TARDY DOWN again. Lather,
rinse, repeat.

This wouldn’t be so bad except that every time there’s a transition, a
message is printed out in the NOC. So a dead host on an ECU causes a
couple of messages to be printed every three minutes until someone
does something about it.

Later, we had similar fun and games with the EGP protocol. The
protocol suggests you not declare the peer up until after you receive
n (where n = 3) HELLO messages. I’m more optimistic than that. My
implementation sent an “OK, you’re up” response after a single HELLO.
Of course, this pissed off some other client, who said, “I can’t be up
after only one HELLO; I better start the whole thing over again.”
Lather, rinse, repeat.
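The EGP mismatch Ron describes can be shown in a few lines: one peer
declares the neighbor up after a single HELLO, while the other insists
on three and restarts the acquisition when told "up" too early. The
function and thresholds below are entirely made up to make the
oscillation concrete; this is not EGP code:

```python
# Toy model: peer A answers "you're up" after up_threshold_a HELLOs;
# peer B refuses to believe an "up" before it has sent expected_by_b
# HELLOs, and restarts from scratch when it hears one too early.

def run_exchange(up_threshold_a, expected_by_b, max_rounds=10):
    """Return (number of restarts by B, whether the peers ever converged)."""
    resets = 0
    hellos_seen = 0
    for _ in range(max_rounds):
        hellos_seen += 1
        if hellos_seen >= up_threshold_a:      # A answers "OK, you're up"
            if hellos_seen < expected_by_b:    # B: "I can't be up yet!"
                resets += 1
                hellos_seen = 0                # start the whole thing over
            else:
                return resets, True            # both sides agree: up
    return resets, False                       # lather, rinse, repeat

print(run_exchange(up_threshold_a=1, expected_by_b=3))  # never converges
print(run_exchange(up_threshold_a=3, expected_by_b=3))  # converges at once
```

With mismatched thresholds the pair loops forever; with matching ones
it converges immediately - the same lesson as the ECU relay-flapping
story, at a different protocol layer.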
segaloco wrote:
> This looks like it might be exactly what you're looking for:
> http://9legacy.org/9legacy/doc/simh/v8

Thanks!

> That said, your original posting mentions the PDP-11, but also
> "Berkeley Unix Nodes". For the latter, do you mean VAX?

Yes, they are VAX-11/780 SIMH instances running 4.1BSD + MIT patches.

> I don't know whether V8 ran on PDP-11 or not, but if that's your
> intent, you may want to start with a 2.xBSD or V7 as a base instead.

The Chaosnet documentation says there were PDP-11 Unix V7 nodes on the
network. That code has not been found though. Same goes for VAX/VMS.
Noel Chiappa wrote:
> > There is a small hobbyist Chaos network going
> What encapsulation are they using to transmit CHAOS packets (over the
> Internet, I assume)?
There are several methods to encapsulate packets for transmission
locally or over the Internet:
- EtherType 0x0804
- IP protocol 0x10 (yours!)
- Unix domain socket
- UDP
- TLS over TCP/IP (preferred transport across Internet)
There is a bridge/router that understands all these, written by
professor Björn Victor. Various computer hardware and emulators support
different encapsulations: real Lispms use Ethernet, an LMI Lambda
emulator uses IP, KLH10 and SIMH use the UDP method, and the PDP-10/X
FPGA uses a Unix socket.
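The "UDP method" in the list above amounts to carrying a raw Chaosnet
frame as the payload of a UDP datagram between two bridges. The sketch
below only shows that general shape; the peer name, the port number,
and the (absent) framing are all invented here - the real wire format
is defined by the bridge software, not by this code:

```python
# Minimal sketch of tunneling an opaque Chaosnet frame inside UDP.
# Hypothetical peer and port; no claim about the bridge's actual framing.
import socket

PEER = ("chaos-peer.example.org", 42042)  # made-up peer bridge and port

def send_chaos_over_udp(frame: bytes, peer=PEER):
    """Wrap one opaque Chaosnet frame in a single UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(frame, peer)

def recv_chaos_over_udp(bind=("0.0.0.0", 42042)):
    """Receive one tunneled frame for handoff to the local Chaos stack."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(bind)
        frame, sender = s.recvfrom(4096)  # Chaos packets are small
        return frame, sender
```

UDP suits this job because Chaosnet already does its own
acknowledgment and retransmission end to end, so a lossy datagram
tunnel underneath it is tolerable (latency, per Ron's point, is the
real constraint).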
On Sat, Jul 16, 2022, 12:38 AM Lars Brinkhoff <lars@nocrew.org> wrote:

> segaloco wrote:
> > This looks like it might be exactly what you're looking for:
> > http://9legacy.org/9legacy/doc/simh/v8
>
> Thanks!
>
> > That said, your original posting mentions the PDP-11, but also
> > "Berkeley Unix Nodes". For the latter, do you mean VAX?
>
> Yes, they are VAX-11/780 SIMH instances running 4.1BSD + MIT patches.
>
> > I don't know whether V8 ran on PDP-11 or not, but if that's your
> > intent, you may want to start with a 2.xBSD or V7 as a base instead.
>
> The Chaosnet documentation says there were PDP-11 Unix V7 nodes on the
> network. That code has not been found though. Same goes for VAX/VMS.

V7 could mean a modification of net unix, or using that as a starting
point. But without code that's just wild speculation on my part.

Warner
> From: Warner Losh

> V7 could mean a modification of net unix

What's "net unix" anyway? I know of the Net releases from CSRG, but
this much precedes that.

What did people with PDP-11 V7 who wanted TCP/IP do, anyway?

    Noel
On Sat, Jul 16, 2022 at 9:51 AM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:

> > From: Warner Losh
>
> > V7 could mean a modification of net unix
>
> What's "net unix" anyway? I know of the Net releases from CSRG, but this
> much precedes that.

I'm referring to the University of Illinois distribution that's in
TUHS as Distributions/Early_Networking/NOSC. Steve Holmgren led the
effort, and it was quite popular. It was V6 based (it came out in the
late V5 time frame, just before V6 was released, and was quickly
updated; the version we have appears to be based on a pure V6
release). I have seen references to it in the ARPAnet census documents
running on both V6 and V7 (though mostly they were silent about which
version). It was NCP, not TCP/IP. I thought this was the normal
nomenclature of the time, but I may be mistaken.

> What did people with PDP-11 V7 who wanted TCP/IP do, anyway?

We also have BBN's stack in Distributions/Early_Networking/BBN. It
appears to be for an 11/40, 11/45 or 11/70, though that's based purely
on different files having '70', '45' and '40' in their names. These
are also based on V6 (at least the code we have has a V6 layout, and
the few files I diff'd against V6 were pretty close). There's also an
early VAX version in that directory as well.

One could adapt either of these to V7, and I seem to recall seeing
references to people that did that in the stuff I read for some of my
early Unix talks, but I can't seem to find it right now.

Warner
Apologies for being off-topic

> What did people with PDP-11 V7 who wanted TCP/IP do, anyway?

Taking it slightly broader (PDP-11 instead of V7), there is a lot of
discussion about that on Mike Muuss’ TCP-IP Digest mailing list:
https://ftp.ripe.net/rfc/museum/tcp-ip-digest/

There is a 1985 index of available implementations as well
(https://ftp.ripe.net/rfc/museum/tcp-ip-implementations.txt.1). It
includes the following options for PDP-11 systems:

1.7.5. UNIX 2.9BSD

DESCRIPTION: 2.9BSD TCP/IP is an adaptation of Berkeley's original VAX
TCP/IP (running under BSD 4.1B UNIX), which in turn is an offshoot of
BBN's VAX TCP/IP. 2.9BSD TCP/IP runs on PDP-11/44s and PDP-11/70s. The
2.8 version from SRI was adapted by Bill Croft (formerly at SRI), then
Tektronix adapted it for 2.9. Berkeley took over modification of the
software and brought it back to SRI, where Dan Chernikoff and Greg
Satz adapted it for a later release of 2.9. In addition to TCP/IP,
UDP, ARP and the raw packet interface is available. ICMP redirects are
not supported. User software implementations include Telnet and FTP,
plus Berkeley-developed local net protocols, RWHO, RSH, RLOGIN, and
RCP. 2.9BSD with TCP/IP support could probably be made to run on
smaller PDP-11s, although the address space would be very tight and
might present problems.

1.7.6. Venix/11 TCP/IP

DESCRIPTION: This is based on the "PDP-11/45" implementation available
from the MIT Laboratory for Computer Science. It has been ported to a
V7 UNIX system, in particular VenturCom's Venix/11 V2.0. As little of
the processing as possible takes place in the kernel, to minimize the
code space required. It fits comfortably on I&D machines, but is
almost hopeless on the smaller machines. The kernel includes a proNET
device driver, IP fragment reassembly, IP header processing, local-net
header processing, and simple routing. The rest of the IP processing,
and all of the UDP and TCP functions, are in user libraries. The
pseudo-teletype driver is also in the kernel, and is used by Server
TELNET. User programs handle ICMP processing; User and Server TELNET,
SMTP, TFTP, Finger, and Discard. There are User programs for Nicname
and Hostname. IEN-116 nameservers are used by all programs, and an
IEN-116 nameserver is also provided. The TCP used is very simple, not
very fast, and lies about windows. No FTP is available, nor is one
currently planned.

1.7.8. BBN-V6-UNIX

DESCRIPTION: This TCP/IP/ICMP implementation runs as a user process in
version 6 UNIX, with modifications obtained from BBN for network
access. IP reassembles fragments into datagrams, but has no separate
IP user interface. TCP supports user and server Telnet, echo, discard,
internet SMTP mail, and FTP. ICMP generates replies to Echo Requests,
and sends Source-Quench when reassembly buffers are full.

1. Hardware - PDP-11/70 and PDP-11/45 running UNIX version 6, with BBN
   IPC additions.
2. Software - written in C, requiring 25K instruction space, 20K data
   space. Supports 10 connections (including "listeners").
3. Unimplemented protocol features:
   - TCP - Discards out-of-order segments.
   - IP - Does not handle some options and ICMP messages.

1.7.9. 3COM-UNET

DESCRIPTION: UNET is a communication software package which enables
UNIX systems to communicate using TCP/IP protocols. UNET will utilize
any physical communications media, from low speed links such as
twisted pair RS-232C to high speed coaxial links such as Ethernet. All
layers of the UNET package are directly available to the user. The
highest layer provides user programs implementing ARPA standard File
Transfer Protocol (UFTP), Virtual Terminal Protocol (UVTP), and Mail
Transfer Protocols (UMTP). These programs in turn utilize the virtual
circuit services of the TCP. The TCP protocol is implemented on top of
the IP. Finally, IP can simultaneously interface to multiple local
networks. UNET implements 5 of the 7 layers of the International
Standards Organization Open Systems Interconnection Reference Model,
layers 2 through 6: Link, Network, Transport, Session, and
Presentation. Features of TCP 6 not yet implemented are Precedence and
Security, End-of-Letter, and Urgent. Feature of IP 4 not yet
implemented is Options.

Of these, we have 2.9BSD and (a forerunner of) BBN-V6-Unix available
in the TUHS Unix Tree. The Venix/11 source and the 3COM source appear
lost. These (unfortunately) are the ones that were implemented on top
of V7. Also, BBN back-ported the TCP/IP code of BBN VAX-TCP to V7 for
their C/70 Unix.
> From: Warner Losh

>> What's "net unix" anyway?

> I'm referring to the University of Illinois distribution

Ah, OK.

> I have seen references to it in the ARPAnet census documents running on
> both V6 and V7 (though mostly they were silent about which version).

Well, V7 came out in January, 1979, and NCP wasn't turned off until
January, 1983, so people had a lot of time to get it running under V7.

> I thought this was the normal nomenclature of the time, but I may be
> mistaken.

I'm not sure what it was usually called; we didn't have much contact
with it at MIT (although I had the source; I'm the one that provided
it to TUHS).

The problem was that although MIT had two IMPs, all the ports on them
were spoken for, for big time-sharing mainframes (4 PDP-10's running
ITS; 1 running TWENEX; a Multics), so there were no ports available to
plug in a lowly PDP-11. (We were unable to get an IP gateway (router)
onto the ARPANET until MIT got some of the first C/30 IMPs.) So we had
no use for the NCP Unix (which I vaguely recall was described as 'the
ARPANET Unix from UIll').

    Noel
We had NCP running on the JHU kernel, which was essentially a V6
kernel with extra stuff (including the ability to mount both V6 and V7
disks). This was done on an 11/34 at Xmas time 1982, when the ARPANET
went to long leaders and the UofIll ANTS we had became obsolete.
> On Jul 17, 2022, at 19:40, jnc@mercury.lcs.mit.edu wrote:
>
>
>>
>> From: Warner Losh
>
>>> What's "net unix" anyway?
>
>> I'm referring to the University of Illinois distribution
>
> Ah, OK.
>
>> I have seen references to it in the ARPAnet census documents running on
>> both V6 and V7 (though mostly they were silent about which version).
>
> Well, V7 came out in January, 1979, and NCP wasn't turned off until January,
> 1983, so people had a lot of time to get it running under V7.
>
>> I thought this was the normal nomenclature of the time, but I may be
>> mistaken.
>
> I'm not sure what it was usually called; we didn't have much contact with it
> at MIT (although I had the source; I'm the one that provided it to TUHS).
>
> The problem was that although MIT had two IMPs, all the ports on them were
> spoken for, for big time-sharing mainframes (4 PDP-10's running ITS; 1
> running TWENEX; a Multics), so there were no ports available to plug in a
> lowly PDP-11. (We were unable to get an IP gateway (router) onto the ARPANET
> until MIT got some of the first C/30 IMPs.) So we had no use for the NCP Unix
> (which I vaguely recall was described as 'the ARPANET Unix from UIll').
>
> Noel
> From: Ron Natalie

> We had NCP running on the JHU kernel

Which NCP implementation was that, the one from UIUC, or something
else? (I'm curious about the one that someone here - I forget who -
mentioned, where IIRC they thought it was a new one, not the UIUC one.
I'm a little boggled that, with so little use for one - there weren't
many NCP sites - someone would have gone to all the work to do another
one. But it's possible...)

If you all used the UIUC one, what did you call it? I looked through
the email about releases from NOSC (who seemed to be the focal point
for maintaining it at one point):

  https://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC/new/dist.log

but it doesn't give a name for it. The documentation:

  https://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC/doc/IllinoisNCP.nr

doesn't either (other than 'Illinois NCP').

It does seem that they were semi-using V7; see e.g.:

  https://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC/green/c.c

but the code looks very V6-ish; see e.g.:

  https://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC/ken/text.c

    Noel