* [TUHS] history of virtual address space
@ 2026-01-19 19:44 ron minnich via TUHS
2026-01-19 20:06 ` [TUHS] " brantley coile via TUHS
` (4 more replies)
0 siblings, 5 replies; 36+ messages in thread
From: ron minnich via TUHS @ 2026-01-19 19:44 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
I was trying to remember how virtual address space evolved in Unix.
When I joined the game in v6 days, processes had virtual 0->0xffff, with
the potential for split I/D IIRC, getting you that bit of extra space. The
kernel also had virtual 0->0xffff, also with split I/D. The kernel
accessed user data with mov/load from previous address space instructions.
But also on the 11, trap vectors chewed up a chunk of low physical memory,
and the diode ROM took out some chunk of high physical. The bootstrap you
set from the front panel started with many 1's.
Later, when we had bigger address spaces on 32-bit systems, page 0 was
made unavailable in the user's address space to catch NULL pointers. Did that
happen on the VAX at some point, or later?
Later we got the "kernel is high virtual with bit 31 set" -- when did that
first happen?
At some point, the conventions solidify, in the sense that kernels have
high order bits all 1s, and user has page 0x200000 and up; and user mode
does not get any virtual address with high order bit set. My rough memory
is that this convention became most common in the 90s.
So, for the pre-mmu Unix, what addresses did processes get, as opposed to
the kernel? Did the kernel get loaded to, and run from, high physical? Did
processes get memory "above trap vectors" and below the kernel?
For the era of MMUs, I know the data general 32-bit systems put user memory
in "high virtual", which turned out to be really painful, as so many Unix
programs used address 0 (unknown to their authors!). What other Unix
systems used a layout where the kernel was in low virtual?
I'm not even sure where to find these answers, so apologies in advance for
the questions.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] history of virtual address space ron minnich via TUHS
@ 2026-01-19 20:06 ` brantley coile via TUHS
2026-01-19 22:46 ` Ron Natalie via TUHS
` (3 subsequent siblings)
4 siblings, 0 replies; 36+ messages in thread
From: brantley coile via TUHS @ 2026-01-19 20:06 UTC (permalink / raw)
To: ron minnich; +Cc: The Eunuchs Hysterical Society
We got first edition Unix working some years ago. The user program was loaded at
orig = 0 .
core = orig+40000
That's octal, so 16K.
In 1986 I did a port of Seventh Edition in which I put both the kernel and user programs at zero. That was a mistake. I then moved the user program to 4K and the kernel to 0x80000000. I got that layout from BSD; I don't know if it was the first.
Maybe you could peep at 32V, but I'm sure it loads the kernel at 1<<31. It's how the VAX hardware works, if I'm not mistaken.
https://www.tuhs.org/Archive/Distributions/Research/Dennis_v1/PreliminaryUnixImplementationDocument_Jun72.pdf
bwc
> On Jan 19, 2026, at 2:44 PM, ron minnich via TUHS <tuhs@tuhs.org> wrote:
>
> I was trying to remember how virtual address space evolved in Unix.
>
> When I joined the game in v6 days, processes had virtual 0->0xffff, with
> the potential for split I/D IIRC, getting you that bit of extra space. The
> kernel also had virtual 0->0xffff, also with split I/D. The kernel
> accessed user data with mov/load from previous address space instructions.
> But also on the 11, trap vectors chewed up a chunk of low physical memory,
> and the diode ROM took out some chunk of high physical. The bootstrap you
> set from the front panel started with many 1's.
>
> Later, when we had bigger address spaces on 32-bit systems, page 0 was
> made unavailable in the user's address space to catch NULL pointers. Did that
> happen on the VAX at some point, or later?
>
> Later we got the "kernel is high virtual with bit 31 set" -- when did that
> first happen?
> At some point, the conventions solidify, in the sense that kernels have
> high order bits all 1s, and user has page 0x200000 and up; and user mode
> does not get any virtual address with high order bit set. My rough memory
> is that this convention became most common in the 90s.
>
> So, for the pre-mmu Unix, what addresses did processes get, as opposed to
> the kernel? Did the kernel get loaded to, and run from, high physical? Did
> processes get memory "above trap vectors" and below the kernel?
>
> For the era of MMUs, I know the data general 32-bit systems put user memory
> in "high virtual", which turned out to be really painful, as so many Unix
> programs used address 0 (unknown to their authors!). What other Unix
> systems used a layout where the kernel was in low virtual?
>
> I'm not even sure where to find these answers, so apologies in advance for
> the questions.
* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] history of virtual address space ron minnich via TUHS
2026-01-19 20:06 ` [TUHS] " brantley coile via TUHS
@ 2026-01-19 22:46 ` Ron Natalie via TUHS
2026-01-19 23:10 ` Luther Johnson via TUHS
2026-01-20 0:05 ` Warner Losh via TUHS
` (2 subsequent siblings)
4 siblings, 1 reply; 36+ messages in thread
From: Ron Natalie via TUHS @ 2026-01-19 22:46 UTC (permalink / raw)
To: ron minnich, The Eunuchs Hysterical Society
The original UNIX PDP-11 address space (and this remained constant
throughout the years) started at location 0 and went up by 64 byte
chunks. This was done in seven of the eight segments the PDP-11
hardware did. The stack started in the last one and went downward
(again in 64 byte chunks) from the top of the 16-bit address space.
If you were running in 407 mode, it was all the same as far as code and
data. All writable.
In 410 mode (pure text), the segments containing the code were write
protected against the user (this also allowed them to be shared between
instances of the program and even locked in swap with the sticky bit).
A 411 (on hardware that supported it) split the code and data segments
into separate segments. You couldn’t even access the code as data on
the user side (thwarting the nargs function).
The kernel ran mostly mapped to physical address. However, the user
structure of the current running process was kept in the high segment.
When we went to Vax, pretty much the same idea was kept (programs
started at location zero and grew up). Some moron decided not only to
start at zero, but to put a 0 at location 0, which led to lots of latent
bugs. Later, when we got the Gould SELs in, we had page zero unmapped
to catch null pointer access, but some programs we didn’t have source to
(notably Oracle) still expected to access *0 with impunity, so we added
a flag we could poke into the header called “braindead vax compatibility
mode” that mapped a zero at zero for these programs.
Things got even stranger when we ported to the HEP, which had four kinds
of memory.
* [TUHS] Re: history of virtual address space
2026-01-19 22:46 ` Ron Natalie via TUHS
@ 2026-01-19 23:10 ` Luther Johnson via TUHS
2026-01-20 0:48 ` Alan Coopersmith via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Luther Johnson via TUHS @ 2026-01-19 23:10 UTC (permalink / raw)
To: tuhs
I once found an old version of RCS that expected *0 = 0, and I think
many other programs did as well. I'm not sure which versions of Unix did
this, or which still do, but I believe many mapped the first page of a
user process's virtual address space to a read-only page of zeroes to
support it. If I'm remembering that wrong, I'm interested, as the other
posters here are, in the details of which systems did which sorts of things.
On 01/19/2026 03:46 PM, Ron Natalie via TUHS wrote:
> The original UNIX PDP-11 address space (and this remained constant
> throughout the years) started at location 0 and went up by 64 byte
> chunks. This was done in seven of the eight segments the PDP-11
> hardware did. The stack started in the last one and went downward
> (again in 64 byte chunks) from the top of the 16-bit address space.
> If you were running in 407 mode, it was all the same as far as code
> and data. All writable.
> In 410 mode (pure text), the segments containing the code were write
> protected against the user (this also allowed them to be shared
> between instances of the program and even locked in swap with the
> sticky bit). A 411 (on hardware that supported it) split the code
> and data segments into separate segments. You couldn’t even access
> the code as data on the user side (thwarting the nargs function).
>
> The kernel ran mostly mapped to physical address. However, the user
> structure of the current running process was kept in the high segment.
>
> When we went to Vax, pretty much the same idea was kept (programs
> started at location zero and grew up). Some moron decided not only
> to start at zero, but put a 0 at location 0 which led to lots of
> latent bugs. Later when we got the Gould SELs in we had page zero
> unmapped to catch null pointer access, but some programs we didn’t
> have source to (notably Oracle) still expected to access *0 with
> impunity, so we added a flag we could poke into the header called
> “braindead vax compatibility mode” that mapped a zero at zero for
> these programs.
>
> Things got even stranger when we ported to the HEP which had four
> kinds of memory
>
>
* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] history of virtual address space ron minnich via TUHS
2026-01-19 20:06 ` [TUHS] " brantley coile via TUHS
2026-01-19 22:46 ` Ron Natalie via TUHS
@ 2026-01-20 0:05 ` Warner Losh via TUHS
2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 23:07 ` Clem Cole via TUHS
2026-01-20 2:22 ` John Levine via TUHS
2026-01-20 19:47 ` [TUHS] Unix use of VAX protection modes Paul Winalski via TUHS
4 siblings, 2 replies; 36+ messages in thread
From: Warner Losh via TUHS @ 2026-01-20 0:05 UTC (permalink / raw)
To: ron minnich; +Cc: The Eunuchs Hysterical Society
On Mon, Jan 19, 2026 at 12:45 PM ron minnich via TUHS <tuhs@tuhs.org> wrote:
> I was trying to remember how virtual address space evolved in Unix.
>
> When I joined the game in v6 days, processes had virtual 0->0xffff, with
> the potential for split I/D IIRC, getting you that bit of extra space. The
> kernel also had virtual 0->0xffff, also with split I/D. The kernel
> accessed user data with mov/load from previous address space instructions.
> But also on the 11, trap vectors chewed up a chunk of low physical memory,
> and the diode ROM took out some chunk of high physical. The bootstrap you
> set from the front panel started with many 1's.
>
> Later, when we had bigger address spaces on 32-bit systems, page 0 was
> made unavailable in the user's address space to catch NULL pointers. Did that
> happen on the VAX at some point, or later?
>
On 4.2BSD on the VAX, dereferencing 0 would segfault. I'm not sure when
this started. I am sure that on System V or 32V on the VAX, dereferencing
0 gave 0. There were many USENET threads about this in the mid-80s. I have
first-hand experience with 4.2BSD, but not the others.
> Later we got the "kernel is high virtual with bit 31 set" -- when did that
> first happen?
> At some point, the conventions solidify, in the sense that kernels have
> high order bits all 1s, and user has page 0x200000 and up; and user mode
> does not get any virtual address with high order bit set. My rough memory
> is that this convention became most common in the 90s.
>
I think that's a 32-bit thing as well. Part of it was driven by
architecture-specific concerns. BSD has linked its kernels at 0xc0000000
since, I think, 4BSD (I didn't check). The space between 0x80000000 and
0xbfffffff was sometimes kernel, sometimes user, depending on the
architecture and kernel implementation. FreeBSD does 4G/4G for 32-bit
kernels on PowerPC and i386 (though it used to have a 3G/1G split for
i386). 32-bit arm, though, is still just a 3G/1G user/kernel split. IIRC,
there were some that had a 2G/2G split, but they escape me at the moment
(that's the classic 'high bit is kernel' layout).
64-bit architectures make it super easy to have lots of upper bits
reserved for this or that, but that's beyond the scope of this post.
> So, for the pre-mmu Unix, what addresses did processes get, as opposed to
> the kernel? Did the kernel get loaded to, and run from, high physical? Did
> processes get memory "above trap vectors" and below the kernel?
>
16-bit Venix loaded the programs at CS:0/DS:0, but CS and DS specified
segments that were well above the hardware interrupt vectors at the
bottom and the ROM reset vectors at the top. 16-bit Venix expected
programs not to be naughty and set their own DS/ES, but many did, to ill
effect (since the first trap would set them back on return).
> For the era of MMUs, I know the data general 32-bit systems put user memory
> in "high virtual", which turned out to be really painful, as so many Unix
> programs used address 0 (unknown to their authors!). What other Unix
> systems used a layout where the kernel was in low virtual?
>
I've not seen any, but that doesn't mean they don't exist.
Warner
> I'm not even sure where to find these answers, so apologies in advance for
> the questions.
>
* [TUHS] Re: history of virtual address space
2026-01-19 23:10 ` Luther Johnson via TUHS
@ 2026-01-20 0:48 ` Alan Coopersmith via TUHS
0 siblings, 0 replies; 36+ messages in thread
From: Alan Coopersmith via TUHS @ 2026-01-20 0:48 UTC (permalink / raw)
To: tuhs
On 1/19/26 15:10, Luther Johnson via TUHS wrote:
> I once found an old version of RCS that expected *0 = 0, and I think many other
> programs did as well. I'm not sure which versions of Unix did or still do this
> any more, but I believe many had the first page in virtual address space for
> user program processes, mapped to a read-only page of zeroes, to support this.
> If I'm remembering that wrong, I'm interested as the other posters here are, in
> the details of which did which sorts of things.
Deep in the SunOS SCCS archives, the initial checkin of the string(3) man page,
labeled as:
D 1.1 83/08/30 15:57:14 wnj 1 0 00103/00000/00000
date and time created 83/08/30 15:57:14 by wnj
contains this warning in the "BUGS" section:
On the Sun processor (and on some other machines), you can NOT use a
zero pointer to indicate a null string. A zero pointer is an error and
results in an abort of the program. If you wish to indicate a null
string, you must have a pointer that points to an explicit null string.
On PDP-11's and VAX'en, a source pointer of zero (0) can generally be
used to indicate a null string. Programmers using NULL to represent an
empty string should be aware of this portability issue.
This warning was apparently not sufficient, since in 1994, Bill Shannon added
the "0@0.so.1" object to Solaris, which you could use with LD_PRELOAD to mmap
a page from /dev/zero to the page at address 0, as described in the ld.so.1(1)
man page:
The user compatibility library /usr/lib/0@0.so.1 provides a mechanism
that establishes a value of 0 at location 0. Some applications exist
that erroneously assume a null character pointer should be treated the
same as a pointer to a null string. A segmentation violation occurs in
these applications when a null character pointer is accessed. If this
library is added to such an application at runtime using LD_PRELOAD,
the library provides an environment that is sympathetic to this errant
behavior. However, the user compatibility library is intended neither
to enable the generation of such applications, nor to endorse this par-
ticular programming practice.
In many cases, the presence of /usr/lib/0@0.so.1 is benign, and it can
be preloaded into programs that do not require it. However, there are
exceptions. Some applications, such as the JVM (Java Virtual Machine),
require that a segmentation violation be generated from a null pointer
access. Applications such as the JVM should not preload
/usr/lib/0@0.so.
There are more lines of comments than code in the source for it:
https://github.com/illumos/illumos-gate/blob/master/usr/src/cmd/sgs/0%400/common/0%400.c
--
-Alan Coopersmith- alan.coopersmith@oracle.com
Oracle Solaris Engineering - https://blogs.oracle.com/solaris
* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] history of virtual address space ron minnich via TUHS
` (2 preceding siblings ...)
2026-01-20 0:05 ` Warner Losh via TUHS
@ 2026-01-20 2:22 ` John Levine via TUHS
2026-01-20 15:45 ` Paul Winalski via TUHS
2026-01-20 19:47 ` [TUHS] Unix use of VAX protection modes Paul Winalski via TUHS
4 siblings, 1 reply; 36+ messages in thread
From: John Levine via TUHS @ 2026-01-20 2:22 UTC (permalink / raw)
To: tuhs
It appears that ron minnich via TUHS <rminnich@gmail.com> said:
>I was trying to remember how virtual address space evolved in Unix.
>
>When I joined the game in v6 days, processes had virtual 0->0xffff, with
>the potential for split I/D IIRC, getting you that bit of extra space. The
>kernel also had virtual 0->0xffff, also with split I/D. The kernel
> accessed user data with mov/load from previous address space instructions.
>But also on the 11, trap vectors chewed up a chunk of low physical memory,
>and the diode ROM took out some chunk of high physical. The bootstrap you
>set from the front panel started with many 1's.
>
>Later, when we had bigger address spaces on 32-bit systems, page 0 was
>made unavailable in the user's address space to catch NULL pointers. Did that
>happen on the VAX at some point, or later?
Someone said 4.2BSD, which seems right. I fixed my share of code that
assumed that *0 == 0.
>Later we got the "kernel is high virtual with bit 31 set" -- when did that
>first happen?
That's how VAX addressing worked. User programs were in the low half
of the address space, the kernel in the high half.
* [TUHS] Re: history of virtual address space
2026-01-20 2:22 ` John Levine via TUHS
@ 2026-01-20 15:45 ` Paul Winalski via TUHS
2026-01-20 18:24 ` John R Levine via TUHS
2026-01-20 20:05 ` Paul Winalski via TUHS
0 siblings, 2 replies; 36+ messages in thread
From: Paul Winalski via TUHS @ 2026-01-20 15:45 UTC (permalink / raw)
To: John Levine; +Cc: tuhs
On Mon, Jan 19, 2026 at 9:22 PM John Levine via TUHS <tuhs@tuhs.org> wrote:
>
> >Later we got the "kernel is high virtual with bit 31 set" -- when did that
> >first happen?
>
> That's how VAX addressing worked. User programs were in the low half
> of the address space, the kernel in the high half.
>
It has nothing to do with how the VAX implements virtual addressing per
se. It's more to do with the way that the various a.out formats for
program images (OMAGIC, NMAGIC, ZMAGIC) all assume that program addressing
starts at 0. It offers the greatest convenience and flexibility if the
parts of kernel address space that must be mapped into user process address
space are at the high end of the address space, well out of the way of user
code and data.
VAX/VMS had the convention that all addresses with bit 31 on were reserved
to the operating system. And the first page of process address space (the
first 512 bytes) was both read- and write-protected, so that any attempt
to use a null pointer or small integer as an address generated an
exception. IMO one of the best design decisions ever made in an ABI. It
caught countless program errors.
Unix seems to have followed VAX/VMS's lead on address space layout on the
VAX.
Where did Unix put the process stack on VAX? In VAX/VMS it was at the top
of the user VA space and grew downward. Contrariwise the top of heap
memory (the break in Unix-speak) started just after the statically
allocated part of the program image (including any mapped shared images)
and grew upwards.
-Paul W.
* [TUHS] Re: history of virtual address space
2026-01-20 0:05 ` Warner Losh via TUHS
@ 2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 18:18 ` Ron Natalie via TUHS
2026-01-20 19:38 ` Warner Losh via TUHS
2026-01-20 23:07 ` Clem Cole via TUHS
1 sibling, 2 replies; 36+ messages in thread
From: Rich Salz via TUHS @ 2026-01-20 16:49 UTC (permalink / raw)
To: Warner Losh; +Cc: The Eunuchs Hysterical Society
On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org> wrote:
> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
> 0.
Are you sure? My memory is the opposite. For example,
https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
* [TUHS] Re: history of virtual address space
2026-01-20 16:49 ` Rich Salz via TUHS
@ 2026-01-20 18:18 ` Ron Natalie via TUHS
2026-01-20 19:38 ` Warner Losh via TUHS
1 sibling, 0 replies; 36+ messages in thread
From: Ron Natalie via TUHS @ 2026-01-20 18:18 UTC (permalink / raw)
To: Rich Salz, Warner Losh; +Cc: The Eunuchs Hysterical Society
Indeed, 4BSD had a zero located at location zero.
Amusingly the PDP-11 had code down there. You knew you were
dereferencing zero when “p&P6” showed up in your output. This is what
was left printable in the first few instructions of crt0.o (a setd
instruction followed by others).
------ Original Message ------
From "Rich Salz via TUHS" <tuhs@tuhs.org>
To "Warner Losh" <imp@bsdimp.com>
Cc "The Eunuchs Hysterical Society" <tuhs@tuhs.org>
Date 1/20/2026 11:49:25 AM
Subject [TUHS] Re: history of virtual address space
>On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org> wrote:
>
>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>> 0.
>
>
>Are you sure? My memory is the opposite. For example,
>https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
* [TUHS] Re: history of virtual address space
2026-01-20 15:45 ` Paul Winalski via TUHS
@ 2026-01-20 18:24 ` John R Levine via TUHS
2026-01-20 18:59 ` Paul Winalski via TUHS
2026-01-20 20:05 ` Paul Winalski via TUHS
1 sibling, 1 reply; 36+ messages in thread
From: John R Levine via TUHS @ 2026-01-20 18:24 UTC (permalink / raw)
To: Paul Winalski; +Cc: tuhs
> On Mon, Jan 19, 2026 at 9:22 PM John Levine via TUHS <tuhs@tuhs.org> wrote:
>
>>
>>> Later we got the "kernel is high virtual with bit 31 set" -- when did that
>>> first happen?
>>
>> That's how VAX addressing worked. User programs were in the low half
>> of the address space, the kernel in the high half.
>>
>
> It has nothing to do with how the VAX implements virtual addressing per
> se. ...
The 1981 VAX Architecture Handbook that I have in my hand disagrees with
you. The Memory Management chapter says that the top half of the address
space is the System (S) while the bottom half is Process. Process is
further divided into P0 which grows up from 0 and P1 which grows down from
the middle. There is one S page table which is in real memory. The P0
and P1 page tables are in S address space, giving the effect of two-level
page tables. Each page table is logically contiguous, with a base and
length register for each.
The a.out program layout is the only one that makes sense on a Vax, code
starts at zero, stack grows down from 2GB.
R's,
John
* [TUHS] Re: history of virtual address space
2026-01-20 18:24 ` John R Levine via TUHS
@ 2026-01-20 18:59 ` Paul Winalski via TUHS
2026-01-20 19:03 ` John R Levine via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Paul Winalski via TUHS @ 2026-01-20 18:59 UTC (permalink / raw)
To: John R Levine; +Cc: tuhs
On Tue, Jan 20, 2026 at 1:24 PM John R Levine <johnl@taugh.com> wrote:
> > On Mon, Jan 19, 2026 at 9:22 PM John Levine via TUHS <tuhs@tuhs.org>
> wrote:
> >
> >>
> >>> Later we got the "kernel is high virtual with bit 31 set" -- when did
> that
> >>> first happen?
> >>
> >> That's how VAX addressing worked. User programs were in the low half
> >> of the address space, the kernel in the high half.
> >>
> >
> > It has nothing to do with how the VAX implements virtual addressing per
> > se. ...
>
> The 1981 VAX Architecture Handbook that I have in my hand disagrees with
> you. The Memory Management chapter says that the top half of the address
> space is the System (S) while the bottom half is Process. Process is
> further divided into P0 which grows up from 0 and P1 which grows down from
> the middle. There is one S page table which is in real memory. The P0
> and P1 page tables are in S address space, giving the effect of two-level
> page tables. Each page table is logically contiguous, with a base and
> length register for each.
>
> The a.out program layout is the only one that makes sense on a Vax, code
> starts at zero, stack grows down from 2GB.
>
I am very familiar with the 1981 VAX Architecture Handbook. It is a
somewhat misleading document in that it is describing both how the VAX
hardware works (this is what I would call the "VAX Architecture") but also
the Application Binary Interface (ABI) of the VAX/VMS operating system.
Remember that at the time DEC was adopting a "one architecture, one
operating system" approach on VAX. They were trying to intimately link VAX
(the hardware architecture) and VMS (the operating system) in the minds of
both DEC's own engineers and their customers.
The Memory Management chapter is describing the memory layout of a VAX/VMS
process. If you look at the hardware architecture side of things, you will
see that there is absolutely no requirement or constraint at the hardware
level to have all of the virtual addresses with the high bit on be reserved
to the OS and those with the high bit off to user space. Each page table
entry has its own protection bits that describe the read and write
permissions for each execution mode (user/supervisor/executive/kernel).
Bottom line is, back in the early days of VAX, when DEC referred to "VAX
Architecture" they meant both the hardware architecture of VAX and the OS
architecture of VMS. Contrast this with "PDP-11 Architecture", which meant
only the hardware design, not the veritable zoo of operating systems that
ran on the PDP-11, each of which had its own peculiarities regarding memory
addressing.
One could have, if one wanted, done the opposite and put user address space
in the address range with the high bit on and the kernel address space in
the low part of the address space. It would be fully conformant with the
VAX page table layout and supported by all then-extant (11/780/750/730) and
future VAXen. Unix chose to follow the VMS design (as documented in the
VAX Architecture Manual) pretty much because there was no reason not to.
-Paul W.
* [TUHS] Re: history of virtual address space
2026-01-20 18:59 ` Paul Winalski via TUHS
@ 2026-01-20 19:03 ` John R Levine via TUHS
2026-01-20 19:58 ` Phil Budne via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: John R Levine via TUHS @ 2026-01-20 19:03 UTC (permalink / raw)
To: Paul Winalski; +Cc: tuhs
On Tue, 20 Jan 2026, Paul Winalski wrote:
> The Memory Management chapter is describing the memory layout of a VAX/VMS
> process. If you look at the hardware architecture side of things, you will
> see that there is absolutely no requirement or constraint at the hardware
> level to have all of the virtual addresses with the high bit on be reserved
> to the OS and those with the high bit off to user space. Each page table
> entry has its own protection bits that describe the read and write
> permissions for each execution mode (user/supervisor/executive/kernel).
Did you miss the bit about the S page tables being in real memory and the
P0 and P1 in S memory, with P1 growing down from 2GB? Really, the
hardware dictates the layout. You might want to find a copy of the
handbook and refresh your memory.
> One could have, if one wanted, done the opposite and put user address space
> in the address range with the high bit on and the kernel address space in
> the low part of the address space. It would be fully conformant with the
> VAX page table layout and supported by all then-extant (11/780/750/730) and
> future VAXen.
Nope. Please read the handbook.
Regards,
John Levine, johnl@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly
* [TUHS] Re: history of virtual address space
2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 18:18 ` Ron Natalie via TUHS
@ 2026-01-20 19:38 ` Warner Losh via TUHS
2026-01-20 21:51 ` Warner Losh via TUHS
1 sibling, 1 reply; 36+ messages in thread
From: Warner Losh via TUHS @ 2026-01-20 19:38 UTC (permalink / raw)
To: Rich Salz; +Cc: The Eunuchs Hysterical Society
On Tue, Jan 20, 2026, 9:49 AM Rich Salz <rich.salz@gmail.com> wrote:
>
>
> On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org>
> wrote:
>
>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>> 0.
>
>
> Are you sure? My memory is the opposite. For example,
> https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
>
Maybe it was a local hack. I know for sure that the VAX assembler I wrote
definitely core dumped when I did *NULL... it may have been 4.3 by then. I
also recall a prof or TA saying that 4.1BSD burned a lot of time on
infinite loops that students wrote, but that by then they were core dumps...
But I've not looked at the 4.3 sources to see if that's unmapped.
Warner
>
>
* [TUHS] Unix use of VAX protection modes
2026-01-19 19:44 [TUHS] history of virtual address space ron minnich via TUHS
` (3 preceding siblings ...)
2026-01-20 2:22 ` John Levine via TUHS
@ 2026-01-20 19:47 ` Paul Winalski via TUHS
2026-01-20 22:07 ` [TUHS] " Warner Losh via TUHS
` (2 more replies)
4 siblings, 3 replies; 36+ messages in thread
From: Paul Winalski via TUHS @ 2026-01-20 19:47 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
The discussion of the history of virtual addressing in Unix called to my
mind a related question:
The VAX has four protection modes, from lowest to highest in privilege:
user, supervisor, executive, kernel. Access to certain instructions and
hardware data structures can only be done from kernel mode. User mode is
of course intended for use by non-privileged code and data. The VAX/VMS
Digital Command Language (DCL) command interpreter ran in supervisor mode.
Record Management Services (RMS, the primary way to access disk files) ran
in executive mode. The kernel, device drivers, hardware interrupt
handlers, etc. ran in kernel mode.
Obviously Unix user-mode process code and data ran in user mode. Likewise
the parts of the kernel that had to access privileged bits of the hardware
(such as device drivers) must have run in kernel mode.
My question is, did Unix make any use of either supervisor or executive
mode on the VAX?
-Paul W.
* [TUHS] Re: history of virtual address space
2026-01-20 19:03 ` John R Levine via TUHS
@ 2026-01-20 19:58 ` Phil Budne via TUHS
2026-01-20 20:59 ` brantley coile via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Phil Budne via TUHS @ 2026-01-20 19:58 UTC (permalink / raw)
To: tuhs
The "feature" of having user and system use non-overlapping virtual
address ranges makes reading user data (eg system call arguments) easy (*)
The VAX reserving a full half of the address space for the system
software has always struck me as typical of the mindset of
kernel/monitor developers thru the ages: cycles expended outside of
the system state (i.e., by users) are wasted(**)!!
(*) PDP-10's with paging systems all implemented a special "previous
context" flavor of the XCT (execute) instruction for moving data
between system and user spaces (systems with more physical memory than
the full/original 256K-word 18-bit virtual address space were common
from the start).
(**) I first noticed it at DEC in the 80's; TOPS-20 monitor developers
hating on EMACS users wasting time by context switching on each
keystroke.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 15:45 ` Paul Winalski via TUHS
2026-01-20 18:24 ` John R Levine via TUHS
@ 2026-01-20 20:05 ` Paul Winalski via TUHS
1 sibling, 0 replies; 36+ messages in thread
From: Paul Winalski via TUHS @ 2026-01-20 20:05 UTC (permalink / raw)
To: tuhs
On Tue, Jan 20, 2026 at 10:45 AM Paul Winalski <paul.winalski@gmail.com>
wrote:
>
> It has nothing to do with how the VAX implements virtual addressing per
> se. It's more to do with the way that the various a.out formats for
> program images (OMAGIC, NMAGIC, ZMAGIC) all assume that program addressing
> starts at 0. It offers the greatest convenience and flexibility if the
> parts of kernel address space that must be mapped into user process address
> space are at the high end of the address space, well out of the way of user
> code and data.
>
> I found and downloaded a copy of the 1981 VAX Architecture Handbook and it
turns out I was wrong about this. The page table design for VAX defines
three ranges of virtual addresses:
System (S): bit 31 of virtual address on
Program 0 (P0): bits 31 and 30 of virtual address off
Program 1 (P1): bit 31 of virtual address off, bit 30 on
Each page table has a pair of hardware registers defining the base and
length of the page table (SBR/SLR, P0BR/P0LR, P1BR/P1LR). The SBR address
is a physical address. The P0BR and P1BR addresses are virtual addresses
in system (bit 31 on) space.
This design facilitates a virtual address layout where there is a single
system address range occupying high virtual memory and per-process P0 and
P1 regions. P1 is intended to implement a stack that grows downwards from
virtual address 0x7fffffff. P0 is intended to implement code and heap
space growing upwards from 0x00000000. VAX/VMS set the protection on the
first page of memory to allow neither read nor write access to any of the
four protection modes, so that any attempt to use small integers in the
range 0-511 (decimal) caused a seg fault. This caught countless
programming errors.
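For concreteness, the two-bit region decode described above can be sketched in C. This is a hypothetical helper for illustration, not code from any period kernel; the enum and function names are invented:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper illustrating the VAX region decode described
 * above: bit 31 selects system (S) space; otherwise bit 30 selects
 * P1 (stack, growing down from 0x7fffffff) vs P0 (code and heap,
 * growing up from 0). */
enum vax_region { VAX_P0, VAX_P1, VAX_SYSTEM };

static enum vax_region vax_region_of(uint32_t va)
{
    if (va & 0x80000000u)   /* bit 31 on: system space */
        return VAX_SYSTEM;
    if (va & 0x40000000u)   /* bit 31 off, bit 30 on: P1 */
        return VAX_P1;
    return VAX_P0;          /* bits 31 and 30 off: P0 */
}
```

Note that the VMS no-access first page mentioned above is a protection setting within P0, not part of this region decode.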
So the virtual address layout of early Unix on the VAX was indeed dictated
by the page table mechanism of the VAX hardware. Thanks to John Levine for
correcting me on this.
-Paul W.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 19:58 ` Phil Budne via TUHS
@ 2026-01-20 20:59 ` brantley coile via TUHS
0 siblings, 0 replies; 36+ messages in thread
From: brantley coile via TUHS @ 2026-01-20 20:59 UTC (permalink / raw)
To: Phil Budne; +Cc: tuhs
I don't know if system designers thought that every cycle executed outside of system state was wasted or not, but the idea of four billion bytes of RAM being installed in a system seemed like a faraway fantasy in 1978. The most one could ever cram into a 780 was 128 MB. I was working on a machine with 256 KB and it seemed quite large. The big machines (IBM 370/158 and CDC Cyber 70/74) only had 4 MB and 128 KB. (The 128K were 60-bit words, I grant you.)
Again, no opinion on designers' attitudes towards pesky users.
Brantley
> On Jan 20, 2026, at 2:58 PM, Phil Budne via TUHS <tuhs@tuhs.org> wrote:
>
> The "feature" of having user and system use non-overlapping virtual
> address ranges makes reading user data (e.g. system call arguments) easy. (*)
>
> The VAX reserving a full half of the address space for the system
> software has always struck me as typical of the mindset of
> kernel/monitor developers thru the ages: cycles expended outside of
> the system state (i.e., by users) are wasted(**)!!
>
> (*) PDP-10's with paging systems all implemented a special "previous
> context" flavor of the XCT (execute) instruction for moving data
> between system and user spaces (systems with more physical memory than
> the full/original 256K-word 18-bit virtual address space were common
> from the start).
>
> (**) I first noticed it at DEC in the 80's; TOPS-20 monitor developers
> hating on EMACS users wasting time by context switching on each
> keystroke.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 19:38 ` Warner Losh via TUHS
@ 2026-01-20 21:51 ` Warner Losh via TUHS
2026-01-20 22:37 ` Ron Natalie via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Warner Losh via TUHS @ 2026-01-20 21:51 UTC (permalink / raw)
To: Rich Salz; +Cc: The Eunuchs Hysterical Society
Looking at 4.2BSD sources, I don't see any offset for address 0. a.out
makes it kinda hard to unmap page 0. The compiler would have to put a start
address into the a.out header, and the 4 binaries I checked had an a_entry
of 0. But it looks like only the stand-alone programs use a_entry... But
that's hidden inside setregs() in kern_exec.c:
u.u_ar0[PC] = u.u_exdata.ux_entloc+2;
and that type puns to a.out's a_entry, so that's going to jump to 0.
4.3BSD spells it a little differently:
setregs(exdata.ex_exec.a_entry);
but has the same a_entry == 0. setregs moved to vax/machdep.c, where it adds
2.
So maybe my memory is bad, or maybe we had a local hack. I can't find out for
sure, but stock 4.3BSD/vax necessarily maps page 0.
Warner
On Tue, Jan 20, 2026 at 12:38 PM Warner Losh <imp@bsdimp.com> wrote:
>
>
> On Tue, Jan 20, 2026, 9:49 AM Rich Salz <rich.salz@gmail.com> wrote:
>
>>
>>
>> On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org>
>> wrote:
>>
>>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>>> 0.
>>
>>
>> Are you sure? My memory is the opposite. For example,
>> https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
>>
>
>
> Maybe it was a local hack. I know for sure that the vax assembler i wrote
> definitely core dump when i did *NULL... it may have been 4.3 by then... I
> also recall a prof or TA saying that 4.1BSD burned a lot of time on
> infinite loops that students wrote, but now they are core dumps... But I've
> not looked at the 4.3 sources to see if that's unmapped.
>
> Warner
>
>>
>>
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-20 19:47 ` [TUHS] Unix use of VAX protection modes Paul Winalski via TUHS
@ 2026-01-20 22:07 ` Warner Losh via TUHS
2026-01-20 23:10 ` Clem Cole via TUHS
2026-01-21 15:52 ` Warner Losh via TUHS
2 siblings, 0 replies; 36+ messages in thread
From: Warner Losh via TUHS @ 2026-01-20 22:07 UTC (permalink / raw)
To: Paul Winalski; +Cc: The Eunuchs Hysterical Society
On Tue, Jan 20, 2026 at 12:48 PM Paul Winalski via TUHS <tuhs@tuhs.org>
wrote:
> The discussion of the history of virtual addressing in Unix called to my
> mind a related question:
>
> The VAX has four protection modes, from lowest to highest in privilege:
> user, supervisor, executive, kernel. Access to certain instructions and
> hardware data structures can only be done from kernel mode. User mode is
> of course intended for use by non-privileged code and data. The VAX/VMS
> Digital Command Language (DCL) command interpreter ran in supervisor mode.
> Record Management Services (RMS, the primary way to access disk files) ran
> in executive mode. The kernel, device drivers, hardware interrupt
> handlers, etc. ran in kernel mode.
>
> Obviously Unix user-mode process code and data ran in user mode. Likewise
> the parts of the kernel that had to access privileged bits of the hardware
> (such as device drivers) must have run in kernel mode.
>
> My question is, did Unix make any use of either supervisor or executive
> mode on the VAX?
>
My 1987 OS class said it only used user and kernel mode.
Looking at the 4.3BSD kernel, it either sets both priv bits or clears both
bits. I don't see where it would be using supervisor or executive mode, but
this was a super fast grep for PSL manipulation.
Warner
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 21:51 ` Warner Losh via TUHS
@ 2026-01-20 22:37 ` Ron Natalie via TUHS
0 siblings, 0 replies; 36+ messages in thread
From: Ron Natalie via TUHS @ 2026-01-20 22:37 UTC (permalink / raw)
To: Warner Losh, Rich Salz; +Cc: The Eunuchs Hysterical Society
There was a start address in the a.out header. This dates back to the
V6 days (or perhaps earlier).
The header, on the PDP-11, was:
Magic number
Text size
Data size
BSS size
Symbol table size
ENTRY LOCATION
and the last two 16-bit values were unused.
When 4BSD came to the vax, the new header was pretty much the same only
with longs instead of shorts.
------ Original Message ------
From "Warner Losh via TUHS" <tuhs@tuhs.org>
To "Rich Salz" <rich.salz@gmail.com>
Cc "The Eunuchs Hysterical Society" <tuhs@tuhs.org>
Date 1/20/2026 4:51:00 PM
Subject [TUHS] Re: history of virtual address space
>Looking at 4.2BSD sources, I don't see any offset for address 0. a.out
>makes it kinda hard to unmap page 0. The compiler would have to put a start
>address into the a.out header, and the 4 binaries I checked had an a_entry
>of 0.. But it looks like only the stand alone programs use a_entry... But
>that's hidden as this inside setregs() in kern_exec.c:
> u.u_ar0[PC] = u.u_exdata.ux_entloc+2;
>and that type puns to a.out's a_entry, so that's going to jump to 0.
>
>4.3BSD spells it a little differently:
> setregs(exdata.ex_exec.a_entry);
>but has the same a_entry == 0. setregs moved to vax/machdep.c where it adds
>2.
>
>So maybe my memory is bad, or maybe we had a local hack. I can't find out for
>sure, but stock 4.3BSD/vax necessarily maps page 0.
>
>Warner
>
>On Tue, Jan 20, 2026 at 12:38 PM Warner Losh <imp@bsdimp.com> wrote:
>
>>
>>
>> On Tue, Jan 20, 2026, 9:49 AM Rich Salz <rich.salz@gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org>
>>> wrote:
>>>
>>>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>>>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>>>> 0.
>>>
>>>
>>> Are you sure? My memory is the opposite. For example,
>>> https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
>>>
>>
>>
>> Maybe it was a local hack. I know for sure that the vax assembler i wrote
>> definitely core dump when i did *NULL... it may have been 4.3 by then... I
>> also recall a prof or TA saying that 4.1BSD burned a lot of time on
>> infinite loops that students wrote, but now they are core dumps... But I've
>> not looked at the 4.3 sources to see if that's unmapped.
>>
>> Warner
>>
>>>
>>>
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 0:05 ` Warner Losh via TUHS
2026-01-20 16:49 ` Rich Salz via TUHS
@ 2026-01-20 23:07 ` Clem Cole via TUHS
1 sibling, 0 replies; 36+ messages in thread
From: Clem Cole via TUHS @ 2026-01-20 23:07 UTC (permalink / raw)
To: Warner Losh; +Cc: The Eunuchs Hysterical Society
below
On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org> wrote:
>
> 4.2BSD on the Vax dereferencing 0 would seg fault.
As the default; an optional setup allowed the previous (4.1) behaviour.
> I'm not sure when this started.
4.2BSD was the first time it was the default.
> I am sure that System V or 32V on the Vax dereferencing 0 gave 0.
Yes, the original Vax kernel from BSD inherited that behaviour from 32/V -
more in a minute.
> There were many USENET threads about this in the mid 80s.
>
Exactly. IMO, the worst offender was the V7 Bourne shell implementation;
the way it managed its heap depended on it. By that time, ports of Unix
were available on ISAs other than the PDP-11, and many people started
to trip over the problem.
At Berkeley, there was much debate, and everyone agreed it was a sin. In
fact, it's #2 in Henry's famous list of the Ten Commandments:
Thou shalt not follow the NULL pointer
<http://www.eskimo.com/~scs/C-faq/s5.html>, for chaos and madness await
thee at its end.
The feeling was that if we protected page zero, then you could get a page
fault, and we could start fixing utilities. It was interesting when you
found one (I remember stumbling on one in the Rand MH mail system). But the
problem was that, by this time frame, a large body of C code had begun to
emerge that people found useful, but had this hidden flaw.
So, with one of the early pre-4.2 releases (4.1 A/B/C), it could be
configured as an option. I ran the CAD Vaxen with it turned on as soon as
we had a system that could support it. By the time of the full release,
4.2BSD had protected page 0.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-20 19:47 ` [TUHS] Unix use of VAX protection modes Paul Winalski via TUHS
2026-01-20 22:07 ` [TUHS] " Warner Losh via TUHS
@ 2026-01-20 23:10 ` Clem Cole via TUHS
2026-01-21 2:08 ` Erik E. Fair via TUHS
2026-01-21 15:52 ` Warner Losh via TUHS
2 siblings, 1 reply; 36+ messages in thread
From: Clem Cole via TUHS @ 2026-01-20 23:10 UTC (permalink / raw)
To: Paul Winalski; +Cc: The Eunuchs Hysterical Society
Paul for the Vax, 32V, the Vax based BSD's, Ultrix32 all used kernel and
user. The original PDP-11's worked that way also. But with 2.11BSD, the
networking code was moved into the supervisor (11's only have 3 possible
modes). This was done for address space reasons. It's a nice piece of
work.
On Tue, Jan 20, 2026 at 2:48 PM Paul Winalski via TUHS <tuhs@tuhs.org>
wrote:
> The discussion of the history of virtual addressing in Unix called to my
> mind a related question:
>
> The VAX has four protection modes, from lowest to highest in privilege:
> user, supervisor, executive, kernel. Access to certain instructions and
> hardware data structures can only be done from kernel mode. User mode is
> of course intended for use by non-privileged code and data. The VAX/VMS
> Digital Command Language (DCL) command interpreter ran in supervisor mode.
> Record Management Services (RMS, the primary way to access disk files) ran
> in executive mode. The kernel, device drivers, hardware interrupt
> handlers, etc. ran in kernel mode.
>
> Obviously Unix user-mode process code and data ran in user mode. Likewise
> the parts of the kernel that had to access privileged bits of the hardware
> (such as device drivers) must have run in kernel mode.
>
> My question is, did Unix make any use of either supervisor or executive
> mode on the VAX?
>
> -Paul W.
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-20 23:10 ` Clem Cole via TUHS
@ 2026-01-21 2:08 ` Erik E. Fair via TUHS
2026-01-21 2:19 ` segaloco via TUHS
` (3 more replies)
0 siblings, 4 replies; 36+ messages in thread
From: Erik E. Fair via TUHS @ 2026-01-21 2:08 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
With regard to systems which didn't have address 0 in their user process address space (and thus dereferencing address 0 resulted in SIGSEGV), the first Unix (7th Edition) on a mc68000 was that way: the Dual Systems 83/80 S-100 (IEEE-696) bus system in November 1981.
Jeff Schriebman did the port, and founded UniSoft Systems not very far away from Dual in West Berkeley. Dual was UniSoft's first customer. Others: Apple Computer (for the Lisa - the parallel port based "Profile" hard drive was a horror), and a host of (Intel) MultiBus-based systems: Fortune, Pixel, Plexus, WiCat, CoData ...
Plop a 5 MHz mc68000, alas without an mc68451 (or other) MMU, on an S-100 card, and how do you protect the kernel? Split the 24-bit address space in half: bottom (from 0) for the kernel, top half (0x800000 and above) for user processes, with simple logic to test processor mode against the high (most significant) bit of any memory reference: any access of kernel address space (address MSB = 0) while the processor is in user mode generates a trap or interrupt, so that the kernel can deliver SIGSEGV to the offending process. Base system configuration: 128 KB RAM, 64 KB for kernel, 64 KB for user, literally in two S-100 DRAM cards with appropriate base addresses set in their DIP switches.
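The address-split check Erik describes amounts to about one gate's worth of logic; a C model of it (hypothetical, purely for illustration) might be:

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of the Dual 83/80 protection logic described above: with no
 * MMU, any user-mode reference whose 24-bit address has its MSB
 * clear (i.e. below 0x800000, kernel territory) raises a trap. */
static bool user_access_traps(uint32_t addr24, bool user_mode)
{
    bool kernel_half = (addr24 & 0x800000u) == 0;
    return user_mode && kernel_half;
}
```

Dereferencing NULL (address 0) in user mode trips exactly this check, which is why so much existing software broke on the machine.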
Naturally, all user software that dereferenced address 0 failed, and there was a lot of that to be cleaned up. Also, some customers told us that our hardware was wrong (nope - logic error on the part of the programmers who assumed address 0 was accessible, i.e., (*NULL); we didn't sell you a DEC PDP-11 which started this whole mess).
Also, yeah, Bourne shell's ... interesting ... notions of memory management: SEG fault? Catch it, and sbrk(2)! Doesn't work on the mc68000 because it didn't keep enough state for restarting faulted instructions unlike the DEC PDP-11 (Clem, do you want to chime in here to describe MassComp's extravagant solution to that?). Asa Romberger recounted the tale of refactoring Bourne shell for mc68000 to me while I was working for Dual and visiting UniSoft relatively frequently. The mc68010 fixed that incomplete fault state bug, and made demand-paged virtual memory possible for mc68k-family-based systems.
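The shell's trick, very roughly, looked like this. This is a sketch under V7-era signal semantics, not the actual sh source; the handler name and growth increment are invented:

```c
#define _DEFAULT_SOURCE   /* expose sbrk() on modern libcs */
#include <signal.h>
#include <unistd.h>

/* Sketch of the V7 Bourne shell allocation trick: don't bounds-check
 * the arena; on SIGSEGV, grow the break and return, letting the
 * faulted instruction restart.  This requires hardware that saves
 * enough state to restart the instruction (PDP-11 yes, mc68000 no). */
static void grow_on_fault(int sig)
{
    (void)sig;
    sbrk(4096);                      /* extend the data segment */
    signal(SIGSEGV, grow_on_fault);  /* re-arm (V7 signals reset) */
}
```

On the mc68000 the faulted instruction could not be restarted, so this scheme fails there, which is what forced the rework Erik mentions.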
Virtually addressed memory came with the mc68451 MMU (but that had other awful problems, which is probably why so many mc68k box manufacturers rolled their own MMUs). Before that, IIRC, Dual's Unix was swap-based: each user process had to run loaded from 0x800000, so you put the ones you weren't running in the swap partition of the system disk. Slow. It was even possible to run Unix from 8-inch floppies, which was even slower, but I knew a guy who did for a while.
I still have a lot of hate for USL because I spent a year cleaning up their bugs in System III, plus it was missing basically all BSD enhancements for better usability (the worst was not having the BSD tty line discipline, and job control). Dual never fully released it - by the time we were ready, System V was out, and we had to do it all over again.
Erik
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 2:08 ` Erik E. Fair via TUHS
@ 2026-01-21 2:19 ` segaloco via TUHS
2026-01-21 2:48 ` Bakul Shah via TUHS
` (2 subsequent siblings)
3 siblings, 0 replies; 36+ messages in thread
From: segaloco via TUHS @ 2026-01-21 2:19 UTC (permalink / raw)
To: The Eunuchs Hysterical Society
On Tuesday, January 20th, 2026 at 18:08, Erik E. Fair via TUHS <tuhs@tuhs.org> wrote:
> With regard to systems which didn't have address 0 in their user process address space (and thus dereferencing address 0 resulted in SIGSEGV), the first Unix (7th Edition) on a mc68000 was that way: the Dual Systems 83/80 S-100 (IEEE-696) bus system in November 1981.
>
>
> Erik
Wouldn't all 68k execution, at least until VM kicks in, avoid the bottom of memory, as that is where the exception vectors live? With no standard MMU, I'd assume each 68k implementation had some freedom in setting their VM characteristics relative to whatever MMU folks went with. Vector 0 (long) holds the initial stack pointer and vector 1 holds the initial PC. On non-UNIX 68k systems I've dealt with, this area is fed by ROM, and thus read-only. At most, 0 accesses would give you the high byte of the initial SP, likely either 0 or 255 due to sign extension of the 24-bit vector.
Presumably VM layouts put this somewhere else, if mapped at all, and reassigned 0 to start at some hunk of RAM when VM was turned on in common 68k kernels?
This does raise the question in my mind of which MMU the ATTIS/USL 68k port expected, how they made that choice, and what implications that had for 68k boxen that wanted to license System V but had an MMU that wasn't supported (if you'd even get that far down the R&D pipeline without verifying your MMU is supported).
- Matt G.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 2:08 ` Erik E. Fair via TUHS
2026-01-21 2:19 ` segaloco via TUHS
@ 2026-01-21 2:48 ` Bakul Shah via TUHS
2026-01-21 14:51 ` Larry McVoy via TUHS
2026-01-21 22:32 ` Clem Cole via TUHS
3 siblings, 0 replies; 36+ messages in thread
From: Bakul Shah via TUHS @ 2026-01-21 2:48 UTC (permalink / raw)
To: Erik E. Fair; +Cc: The Eunuchs Hysterical Society
Fortune, a 68K-based system, had a custom external MMU with 4 segments, built using MSI TTL chips. Recall that the original 68K supported up to 16 MB of memory (24-bit pointers):
purpose   base       limit
text      0x000000   base+size
data      0x400000   base+size
extra     0xbfffff   base-size  (i.e. segment could grow downward)
stack     0xffffff   base-size
IIRC the kernel was loaded at physical address 0x0 (or maybe the ROM was there -- wherever the 68k started after being brought out of reset). Up to 4 256 KB memory cards could be plugged in. To context switch, you had to reload the segment registers with the physical addresses and limits of the new process. copyin/copyout of syscall args were asm functions to deal with these segments.
The compiler generated function-entry prolog code to touch the end of the needed local frame. If this was past the stack limit, the stack would be grown in the trap handler. IIRC this was one instruction that could be restarted safely?
Swap was needed only when the sum total of process space exceeded physmem minus kernel space (256 KB to 1 MB max on the original system).
Another neat hack: the IO driver for an IO device was stored on the relevant IO card itself. Each IO slot was at a different address, and the system queried for the presence of a card. If one was found, it had to follow the IO API we had defined, providing the device name, what kind of device it was, etc.
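A C model of the bounds check implied by that segment table might look like the following. The names and the exact grow-down semantics are assumptions for illustration, not Fortune's actual hardware definition:

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of a four-segment bounds check: text/data grow upward from
 * base, extra/stack grow downward from base, as the table above
 * suggests. */
struct seg {
    uint32_t base;
    uint32_t size;
    bool     grows_down;
};

static bool seg_contains(const struct seg *s, uint32_t addr)
{
    if (s->grows_down)
        return addr <= s->base && addr > s->base - s->size;
    return addr >= s->base && addr < s->base + s->size;
}
```

A trap on an address just past the current limit is what let the kernel grow the stack segment on demand, as described for the function-entry prolog above.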
> On Jan 20, 2026, at 6:08 PM, Erik E. Fair via TUHS <tuhs@tuhs.org> wrote:
>
> With regard to systems which didn't have address 0 in their user process address space (and thus dereferencing address 0 resulted in SIGSEGV), the first Unix (7th Edition) on a mc68000 was that way: the Dual Systems 83/80 S-100 (IEEE-696) bus system in November 1981.
>
> Jeff Schriebman did the port, and founded UniSoft Systems not very far away from Dual in West Berkeley. Dual was UniSoft's first customer. Others: Apple Computer (for the Lisa - the parallel port based "Profile" hard drive was a horror), and a host of (Intel) MultiBus-based systems: Fortune, Pixel, Plexus, WiCat, CoData ...
>
> Plop a 5 MHz mc68000 alas without an mc68451 (or other) MMU on an S-100 card, and how to protect the kernel? Split the 24-bit address space in half: bottom (from 0) for the kernel, top half (0x800000 and above) for user processes, and simple logic to test processor mode vs high (most significant) bit in any memory reference: any access of kernel address space (address msb = 0) while the processor is in user mode generates a trap or interrupt, such that the kernel can deliver SIGSEGV to the offending process. Base system configuration: 128KB RAM, 64KB for kernel, 64KB for user, literally in two S-100 DRAM cards with appropriate base addresses set in their DIP switches.
>
> Naturally, all user software that dereferenced address 0 failed, and there was a lot of that to be cleaned up. Also, some customers told us that our hardware was wrong (nope - logic error on the part of the programmers who assumed address 0 was accessible, i.e., (*NULL); we didn't sell you a DEC PDP-11 which started this whole mess).
>
> Also, yeah, Bourne shell's ... interesting ... notions of memory management: SEG fault? Catch it, and sbrk(2)! Doesn't work on the mc68000 because it didn't keep enough state for restarting faulted instructions unlike the DEC PDP-11 (Clem, do you want to chime in here to describe MassComp's extravagant solution to that?). Asa Romberger recounted the tale of refactoring Bourne shell for mc68000 to me while I was working for Dual and visiting UniSoft relatively frequently. The mc68010 fixed that incomplete fault state bug, and made demand-paged virtual memory possible for mc68k-family-based systems.
>
> Virtually addressed memory came with the mc68451 MMU (but that had other awful problems which is probably so many mc68k box system manufacturers rolled their own MMUs). Before that, IIRC, Dual's Unix was swap-based: had to run each user process loaded from 0x800000 ... so put the ones you're not running in the swap partition of the system disk. Slow. It was even possible to run Unix from 8-inch floppies, which was even slower, but I knew a guy who did for a while.
>
> I still have a lot of hate for USL because I spent a year cleaning up their bugs in System III, plus it was missing basically all BSD enhancements for better usability (the worst was not having the BSD tty line discipline, and job control). Dual never fully released it - by the time we were ready, System V was out, and we had to do it all over again.
>
> Erik
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 2:08 ` Erik E. Fair via TUHS
2026-01-21 2:19 ` segaloco via TUHS
2026-01-21 2:48 ` Bakul Shah via TUHS
@ 2026-01-21 14:51 ` Larry McVoy via TUHS
2026-01-21 15:38 ` Rich Salz via TUHS
2026-01-21 22:32 ` Clem Cole via TUHS
3 siblings, 1 reply; 36+ messages in thread
From: Larry McVoy via TUHS @ 2026-01-21 14:51 UTC (permalink / raw)
To: Erik E. Fair; +Cc: The Eunuchs Hysterical Society
On Tue, Jan 20, 2026 at 06:08:09PM -0800, Erik E. Fair via TUHS wrote:
> Clem, do you want to chime in here to describe MassComp's extravagant
> solution to that?
I think I sort of know what they did. They ran 2 68k cpus in lock step,
with one some number of instructions behind. When the leading CPU
faulted, it stopped the other one before it ran the instruction that
faulted, the stopped one somehow was redirected to handle the fault.
I feel like Forest Baskett may have been involved in that idea, maybe
he did it first elsewhere? Anyone know?
Clem will chime in with the correct details.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 14:51 ` Larry McVoy via TUHS
@ 2026-01-21 15:38 ` Rich Salz via TUHS
2026-01-21 15:47 ` Larry McVoy via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Rich Salz via TUHS @ 2026-01-21 15:38 UTC (permalink / raw)
To: Larry McVoy; +Cc: The Eunuchs Hysterical Society
On Wed, Jan 21, 2026 at 9:51 AM Larry McVoy via TUHS <tuhs@tuhs.org> wrote:
> I think I sort of know what they did. They ran 2 68k cpus in lock step,
> with one some number of instructions behind. When the leading CPU
> faulted, it stopped the other one before it ran the instruction that
> faulted, the stopped one somehow was redirected to handle the fault.
Apollo did something similar. If the first CPU faulted, the second would
halt it, page in the memory and restart the first one.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 15:38 ` Rich Salz via TUHS
@ 2026-01-21 15:47 ` Larry McVoy via TUHS
2026-01-21 21:30 ` Jonathan Gray via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Larry McVoy via TUHS @ 2026-01-21 15:47 UTC (permalink / raw)
To: Rich Salz; +Cc: The Eunuchs Hysterical Society
On Wed, Jan 21, 2026 at 10:38:27AM -0500, Rich Salz wrote:
> On Wed, Jan 21, 2026 at 9:51 AM Larry McVoy via TUHS <tuhs@tuhs.org> wrote:
>
> > I think I sort of know what they did. They ran 2 68k cpus in lock step,
> > with one some number of instructions behind. When the leading CPU
> > faulted, it stopped the other one before it ran the instruction that
> > faulted, the stopped one somehow was redirected to handle the fault.
>
>
> Apollo did something similar. If the first CPU faulted, the second would
> halt it, page in the memory and restart the first one.
Yeah, I think it was a pretty standard trick, I just can't remember who came
up with it first. My aging memory thinks it was Forest but I may have made
that up.
--
---
Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-20 19:47 ` [TUHS] Unix use of VAX protection modes Paul Winalski via TUHS
2026-01-20 22:07 ` [TUHS] " Warner Losh via TUHS
2026-01-20 23:10 ` Clem Cole via TUHS
@ 2026-01-21 15:52 ` Warner Losh via TUHS
2026-01-21 15:59 ` Warner Losh via TUHS
2 siblings, 1 reply; 36+ messages in thread
From: Warner Losh via TUHS @ 2026-01-21 15:52 UTC (permalink / raw)
To: Paul Winalski; +Cc: The Eunuchs Hysterical Society
On Tue, Jan 20, 2026 at 12:48 PM Paul Winalski via TUHS <tuhs@tuhs.org>
wrote:
> My question is, did Unix make any use of either supervisor or executive
> mode on the VAX?
>
I know that SGI did use these modes on the MIPS processor. But it was a bit
like the 2.11BSD use: they created supervisor-mode code for the X server
that anybody could call, but which mediated access to the hardware through
trusted code. They did this as part of their optional protocol interface to
the hardware, since 80% of the time was chewed up in protocol things, at
least according to a presentation I saw / paper I read about it at one of
the X technical conferences. They did this generically to allow a richer
interface to devices than a driver could normally provide to userland, as
well as hopping on the microkernel bandwagon. It was a good idea, though,
since this was basically just a process that ran in a different processor
mode, so it was easy to restart if it crashed...
Warner
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 15:52 ` Warner Losh via TUHS
@ 2026-01-21 15:59 ` Warner Losh via TUHS
0 siblings, 0 replies; 36+ messages in thread
From: Warner Losh via TUHS @ 2026-01-21 15:59 UTC (permalink / raw)
To: Paul Winalski; +Cc: The Eunuchs Hysterical Society
On Wed, Jan 21, 2026 at 8:52 AM Warner Losh <imp@bsdimp.com> wrote:
>
>
> On Tue, Jan 20, 2026 at 12:48 PM Paul Winalski via TUHS <tuhs@tuhs.org>
> wrote:
>
>> My question is, did Unix make any use of either supervisor or executive
>> mode on the VAX?
>>
>
> I know that SGI did use these modes on the MIPS processor. But it was a
> bit like
> the 2.11BSD use: They created supervisor mode code for the X server that
> anybody
> could call, but mediated the access to the hardware to trusted code. They
> did this as
> part of their protocol optional interface to the hardware since 80% of the
> time was chewed
> up in protocol things, at least according to a presentation I saw / paper
> I read about
> it at one of the X technical conferences. They did this generically to
> allow a richer interface
> to devices than a driver could normally provide to userland, as well as
> hopping on the
> microkernel bandwagon. It was a good idea, though, since this was
> basically just a process
> that ran in a different processor mode, so it was easy to restart if it
> crashed...
>
The AI chat bots seem to think this paper was
"A High Performance X Server for a 64-bit Architecture," presented by
Silicon Graphics at the 8th X Technical Conference in 1994,
But I can't find it online... Anybody know a source?
Warner
^ permalink raw reply [flat|nested] 36+ messages in thread
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 15:47 ` Larry McVoy via TUHS
@ 2026-01-21 21:30 ` Jonathan Gray via TUHS
0 siblings, 0 replies; 36+ messages in thread
From: Jonathan Gray via TUHS @ 2026-01-21 21:30 UTC (permalink / raw)
To: Larry McVoy; +Cc: tuhs
On Wed, Jan 21, 2026 at 07:47:40AM -0800, Larry McVoy via TUHS wrote:
> On Wed, Jan 21, 2026 at 10:38:27AM -0500, Rich Salz wrote:
> > On Wed, Jan 21, 2026 at 9:51 AM Larry McVoy via TUHS <tuhs@tuhs.org> wrote:
> >
> > > I think I sort of know what they did. They ran 2 68k cpus in lock step,
> > > with one some number of instructions behind. When the leading CPU
> > > faulted, it stopped the other one before it ran the instruction that
> > > faulted, the stopped one somehow was redirected to handle the fault.
> >
> >
> > Apollo did something similar. If the first CPU faulted, the second would
> > halt it, page in the memory and restart the first one.
>
> Yeah, I think it was a pretty standard trick, I just can't remember who came
> up with it first. My aging memory thinks it was Forest but I may have made
> that up.
Clem mentioned a paper from Forest in
https://www.tuhs.org/pipermail/tuhs/2020-January/020130.html
Forest Baskett
Pascal and Virtual Memory in a Z8000 or MC68000 based Design Station
COMPCON 80, Digest of Papers, pp 456-459,
IEEE Computer Society, Feb. 25, 1980
Also referenced in
https://bitsavers.org/pdf/stanford/sun/The_SUN_Workstation_Draft_Mar80.pdf
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 2:08 ` Erik E. Fair via TUHS
` (2 preceding siblings ...)
2026-01-21 14:51 ` Larry McVoy via TUHS
@ 2026-01-21 22:32 ` Clem Cole via TUHS
2026-01-21 22:35 ` Clem Cole via TUHS
3 siblings, 1 reply; 36+ messages in thread
From: Clem Cole via TUHS @ 2026-01-21 22:32 UTC (permalink / raw)
To: Erik E. Fair; +Cc: The Eunuchs Hysterical Society
below
On Tue, Jan 20, 2026 at 9:08 PM Erik E. Fair via TUHS <tuhs@tuhs.org> wrote:
>
> Also, yeah, Bourne shell's ... interesting ... notions of memory
> management: SEG fault? Catch it, and sbrk(2)! Doesn't work on the mc68000
> because it didn't keep enough state for restarting faulted instructions
> unlike the DEC PDP-11 (Clem, do you want to chime in here to describe
> MassComp's extravagant solution to that?). Asa Romberger recounted the tale
> of refactoring Bourne shell for mc68000 to me while I was working for Dual
> and visiting UniSoft relatively frequently. The mc68010 fixed that
> incomplete fault state bug, and made demand-paged virtual memory possible
> for mc68k-family-based systems.
>
To be fair, the idea originated with Forest Baskett's Asilomar paper in
1980, if I remember the date correctly. Sadly, the paper appears to have
been lost to time, as far as I can tell. As Erik mentioned, the problem is
that the instruction needs to be restarted and allowed to complete: after
the VM code makes the page available, it restarts the processor with the
original instruction that previously failed.
Without getting into too many HW details, there were several types of
memory available, each with different characteristics for accessing them.
Also, the processors of that day used a synchronous bus, *i.e.*, the
processor put an address onto the memory bus and waited for the resulting
read or write before the instruction that created the address could
"complete." [1] Since different memories have different timing
characteristics, the 68000 has a "wait state" pin; depending on how long it
is asserted, N clock cycles go by while the processor waits for the memory
transaction to complete.
So the Masscomp MC500 CPU board had 2 68000s on it, called the "executor"
and the "fixer." The fixer did just that: it ran the code to fetch a new
page for a read, or to create a copy of the page on a write, in the
address space assigned to the process executing on the processor. When an
executor "faulted," instead of returning an error, the hardware just sent
wait states to it. The fixer set up the memory and released the wait pin on
the executor, which would complete the requested operation without
faulting, but at the expense of a very long time for that instruction to
complete on the executor.
When Motorola came out with the 68010, which saved enough "microcode"
state for a faulted instruction to be restarted later, we swapped out the
executor and changed a small amount of logic on the CPU board. The
executor was then free to do something else while the fixer cared for the
memory. When the fixer gave the ok, the executor would pick up where it
left off in the process that originally hit the fault.
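[Editorial sketch, not from the thread.] The executor/fixer handshake described above can be modeled as a toy program; all names here (ToyMMU, executor_load, fixer) are invented for illustration. A threading.Event stands in for the wait-state pin: a load that misses simply blocks until the fixer maps the page in, then completes as if nothing had happened.

```python
# Toy model of the Masscomp MC500 "executor"/"fixer" scheme (hypothetical
# names): the executor stalls in wait states on a missing page until the
# fixer makes it resident, then the original access completes transparently.
import threading

class ToyMMU:
    def __init__(self):
        self.resident = set()           # pages currently mapped
        self.ready = threading.Event()  # stands in for the wait-state pin

    def fixer(self, page):
        # "fixer" CPU: bring the page in, then release the executor's wait pin
        self.resident.add(page)
        self.ready.set()

    def executor_load(self, page):
        # "executor" CPU: a load either completes or stalls until fixed;
        # it never sees an error, just a very slow memory access
        if page not in self.resident:
            self.ready.clear()
            t = threading.Thread(target=self.fixer, args=(page,))
            t.start()
            self.ready.wait()           # held in wait states, not faulted
            t.join()
        return f"data@{page}"

mmu = ToyMMU()
print(mmu.executor_load(7))  # stalls briefly on first touch, then completes
```

A second load of the same page takes the fast path, just as the real executor only stalled on pages the fixer had not yet supplied.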
A "fun" (undocumented) feature was discovered by us soon after the 68010
showed up. Masscomp was the first UNIX vendor to build and sell an MP
machine (MC500/DP). But we ran into a field issue with the 68010s.
It turns out that different microcode was implemented in different "steps"
of the chip, and what was saved on the stack differed between them. So the
problem arose if a CPU board was made with a 68010 from batch A while its
MP twin CPU board had a 68010 from batch B, and thus slightly different
microcode. What we discovered was that when the 68010-based executor on
CPU card A faulted, the microcode state was stored on the stack per
Motorola's specs, but when we tried to restart the process by allowing the
previously faulting instruction to be retried, batch B's 68010 executor
could not grok the microcode state that had been stored by executor A (and
vice versa).
It took many complaints to our Motorola rep to get them to make the saved
microstate identical even when they "stepped" the processor. In the
meantime, manufacturing and field service made the "field replaceable unit"
(FRU) a matched set of processor cards.
Clem
[1] The PDP-11 worked this way, but starting with the VAX, and copied by
more modern processors, is the idea of a "split transaction" bus, where the
processor makes a request of the memory system and is free to do something
else. The memory returns a bus transaction indicating whether data was
written or read, and the processor can then retire the instruction. This
is really useful when you start to break up instructions in a pipeline.
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 22:32 ` Clem Cole via TUHS
@ 2026-01-21 22:35 ` Clem Cole via TUHS
2026-01-21 23:43 ` Tom Lyon via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Clem Cole via TUHS @ 2026-01-21 22:35 UTC (permalink / raw)
To: Erik E. Fair; +Cc: The Eunuchs Hysterical Society
Thanks, Jonathan - yes, it was COMPCON at Asilomar.
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 22:35 ` Clem Cole via TUHS
@ 2026-01-21 23:43 ` Tom Lyon via TUHS
2026-01-21 23:48 ` Clem Cole via TUHS
0 siblings, 1 reply; 36+ messages in thread
From: Tom Lyon via TUHS @ 2026-01-21 23:43 UTC (permalink / raw)
To: Clem Cole; +Cc: The Eunuchs Hysterical Society
The 68000 didn't have any special logic for inserting wait states; one
merely asserted DTACK when the data was actually ready.
Thus the cult-status newsletter "DTACK Grounded" -
https://en.wikipedia.org/wiki/DTACK_Grounded
* [TUHS] Re: Unix use of VAX protection modes
2026-01-21 23:43 ` Tom Lyon via TUHS
@ 2026-01-21 23:48 ` Clem Cole via TUHS
0 siblings, 0 replies; 36+ messages in thread
From: Clem Cole via TUHS @ 2026-01-21 23:48 UTC (permalink / raw)
To: Tom Lyon; +Cc: The Eunuchs Hysterical Society
Ah yes. Thank you. I had forgotten that, unlike some of the other
contemporary chips, the 68000 just waits for the memory system to tell it
to continue. The faster the access time of your memory, the faster it
could complete the instructions.
Sent from a handheld; expect more typos than usual
end of thread, other threads:[~2026-01-21 23:49 UTC | newest]
Thread overview: 36+ messages
2026-01-19 19:44 [TUHS] history of virtual address space ron minnich via TUHS
2026-01-19 20:06 ` [TUHS] " brantley coile via TUHS
2026-01-19 22:46 ` Ron Natalie via TUHS
2026-01-19 23:10 ` Luther Johnson via TUHS
2026-01-20 0:48 ` Alan Coopersmith via TUHS
2026-01-20 0:05 ` Warner Losh via TUHS
2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 18:18 ` Ron Natalie via TUHS
2026-01-20 19:38 ` Warner Losh via TUHS
2026-01-20 21:51 ` Warner Losh via TUHS
2026-01-20 22:37 ` Ron Natalie via TUHS
2026-01-20 23:07 ` Clem Cole via TUHS
2026-01-20 2:22 ` John Levine via TUHS
2026-01-20 15:45 ` Paul Winalski via TUHS
2026-01-20 18:24 ` John R Levine via TUHS
2026-01-20 18:59 ` Paul Winalski via TUHS
2026-01-20 19:03 ` John R Levine via TUHS
2026-01-20 19:58 ` Phil Budne via TUHS
2026-01-20 20:59 ` brantley coile via TUHS
2026-01-20 20:05 ` Paul Winalski via TUHS
2026-01-20 19:47 ` [TUHS] Unix use of VAX protection modes Paul Winalski via TUHS
2026-01-20 22:07 ` [TUHS] " Warner Losh via TUHS
2026-01-20 23:10 ` Clem Cole via TUHS
2026-01-21 2:08 ` Erik E. Fair via TUHS
2026-01-21 2:19 ` segaloco via TUHS
2026-01-21 2:48 ` Bakul Shah via TUHS
2026-01-21 14:51 ` Larry McVoy via TUHS
2026-01-21 15:38 ` Rich Salz via TUHS
2026-01-21 15:47 ` Larry McVoy via TUHS
2026-01-21 21:30 ` Jonathan Gray via TUHS
2026-01-21 22:32 ` Clem Cole via TUHS
2026-01-21 22:35 ` Clem Cole via TUHS
2026-01-21 23:43 ` Tom Lyon via TUHS
2026-01-21 23:48 ` Clem Cole via TUHS
2026-01-21 15:52 ` Warner Losh via TUHS
2026-01-21 15:59 ` Warner Losh via TUHS