* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] " ron minnich via TUHS
@ 2026-01-19 20:06 ` brantley coile via TUHS
2026-01-19 22:46 ` Ron Natalie via TUHS
` (2 subsequent siblings)
3 siblings, 0 replies; 20+ messages in thread
From: brantley coile via TUHS @ 2026-01-19 20:06 UTC (permalink / raw)
To: ron minnich; +Cc: The Eunuchs Hysterical Society
We got first edition Unix working some years ago. The user program was loaded at
orig = 0 .
core = orig+40000
That's octal so 16k.
I did a port of Seventh Edition where I put the kernel and user programs at zero. That was a mistake. That was in 1986. I then moved the user program to 4K and the kernel to 0x80000000. I got that from BSD. Don't know if it was the first.
Maybe you could peep at 32V, but I'm sure it loads the kernel at 1<<31. It's how the VAX hardware works, if I'm not mistaken.
https://www.tuhs.org/Archive/Distributions/Research/Dennis_v1/PreliminaryUnixImplementationDocument_Jun72.pdf
bwc
> On Jan 19, 2026, at 2:44 PM, ron minnich via TUHS <tuhs@tuhs.org> wrote:
>
> I was trying to remember how virtual address space evolved in Unix.
>
> When I joined the game in v6 days, processes had virtual 0->0xffff, with
> the potential for split I/D IIRC, getting you that bit of extra space. The
> kernel also had virtual 0->0xffff, also with split I/D. The kernel
> accessed user data with mov/load from previous address space instructions.
> But also on the 11, trap vectors chewed up a chunk of low physical memory,
> and the diode ROM took out some chunk of high physical. The bootstrap you
> set from the front panel started with many 1's.
>
> Later, when we had bigger address spaces on 32- bit systems, page 0 was
> made unavailable in users address space to catch NULL pointers. Did that
> happen on the VAX at some point, or later?
>
> Later we got the "kernel is high virtual with bit 31 set" -- when did that
> first happen?
> At some point, the conventions solidify, in the sense that kernels have
> high order bits all 1s, and user has page 0x200000 and up; and user mode
> does not get any virtual address with high order bit set. My rough memory
> is that this convention became most common in the 90s.
>
> So, for the pre-mmu Unix, what addresses did processes get, as opposed to
> the kernel? Did the kernel get loaded to, and run from, high physical? Did
> processes get memory "above trap vectors" and below the kernel?
>
> For the era of MMUs, I know the data general 32-bit systems put user memory
> in "high virtual", which turned out to be really painful, as so many Unix
> programs used address 0 (unknown to their authors!). What other Unix
> systems used a layout where the kernel was in low virtual?
>
> I'm not even sure where to find these answers, so apologies in advance for
> the questions.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
@ 2026-01-19 22:33 Noel Chiappa via TUHS
0 siblings, 0 replies; 20+ messages in thread
From: Noel Chiappa via TUHS @ 2026-01-19 22:33 UTC (permalink / raw)
To: tuhs; +Cc: jnc
> From: Ron Minnich via TUHS tuhs at tuhs.org
> on the 11, trap vectors chewed up a chunk of low physical memory
I don't think so: I have a note that "Trap and interrupt vectors are fetched
from Kernel Data space, when memory management is enabled" (can't cite a
manual page, sorry, but I'm pretty sure that's right). So they could be
anywhere, physically.
Of course, in UNIX, Kernel Data space is at physical 0, so in _practice_, in
UNIX, they are in low physical memory.
Actually, there's:
#define B_RELOC 0200 /* no longer used */
in buf.h, and I have a vague memory of reading that at some point, buffers'
physical address was _not_ the same as their virtual addresses (which is what
all the hair with 'sysfix', etc, is needed to do), so who knows if that was
always true (if buffers were not always in low physical memory).
> What other Unix systems used a layout where the kernel was in low
> virtual?
In MINI-UNIX, user programs are loaded and run at 060000; the kernel was in
low memory. (Probably LSX too, but I don't know much about that.) Not sure if
that's what you meant!
Noel
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] " ron minnich via TUHS
2026-01-19 20:06 ` [TUHS] " brantley coile via TUHS
@ 2026-01-19 22:46 ` Ron Natalie via TUHS
2026-01-19 23:10 ` Luther Johnson via TUHS
2026-01-20 0:05 ` Warner Losh via TUHS
2026-01-20 2:22 ` John Levine via TUHS
3 siblings, 1 reply; 20+ messages in thread
From: Ron Natalie via TUHS @ 2026-01-19 22:46 UTC (permalink / raw)
To: ron minnich, The Eunuchs Hysterical Society
The original UNIX PDP-11 address space (and this remained constant
throughout the years) started at location 0 and went up by 64 byte
chunks. This was done in seven of the eight segments the PDP-11
hardware did. The stack started in the last one and went downward
(again in 64 byte chunks) from the top of the 16-bit address space.
If you were running in 407 mode, it was all the same as far as code and
data. All writable.
In 410 mode (pure text), the segments containing the code were write
protected against the user (this also allowed them to be shared between
instances of the program and even locked in swap with the sticky bit).
A 411 (on hardware that supported it) split the code and data segments
into separate segments. You couldn’t even access the code as data on
the user side (thwarting the nargs function).
The kernel ran mostly mapped to physical address. However, the user
structure of the current running process was kept in the high segment.
When we went to Vax, pretty much the same idea was kept (programs
started at location zero and grew up). Some moron decided not only to
start at zero, but put a 0 at location 0 which led to lots of latent
bugs. Later when we got the Gould SELs in we had page zero unmapped
to catch null pointer access, but some programs we didn’t have source to
(notably Oracle) still expected to access *0 with impunity, so we added
a flag we could poke into the header called “braindead vax compatibility
mode” that mapped a zero at zero for these programs.
Things got even stranger when we ported to the HEP which had four kinds
of memory
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-19 22:46 ` Ron Natalie via TUHS
@ 2026-01-19 23:10 ` Luther Johnson via TUHS
2026-01-20 0:48 ` Alan Coopersmith via TUHS
0 siblings, 1 reply; 20+ messages in thread
From: Luther Johnson via TUHS @ 2026-01-19 23:10 UTC (permalink / raw)
To: tuhs
I once found an old version of RCS that expected *0 = 0, and I think
many other programs did as well. I'm not sure which versions of Unix did
or still do this any more, but I believe many had the first page in
virtual address space for user program processes, mapped to a read-only
page of zeroes, to support this. If I'm remembering that wrong, I'm
interested as the other posters here are, in the details of which did
which sorts of things.
On 01/19/2026 03:46 PM, Ron Natalie via TUHS wrote:
> The original UNIX PDP-11 address space (and this remained constant
> throughout the years) started at location 0 and went up by 64 byte
> chunks. This was done in seven of the eight segments the PDP-11
> hardware did. The stack started in the last one and went downward
> (again in 64 byte chunks) from the top of the 16-bit address space.
> If you were running in 407 mode, it was all the same as far as code
> and data. All writable.
> In 410 mode (pure text), the segments containing the code were write
> protected against the user (this also allowed them to be shared
> between instances of the program and even locked in swap with the
> sticky bit). A 411 (on hardware that supported it) split the code
> and data segments into separate segments. You couldn’t even access
> the code as data on the user side (thwarting the nargs function).
>
> The kernel ran mostly mapped to physical address. However, the user
> structure of the current running process was kept in the high segment.
>
> When we went to Vax, pretty much the same idea was kept (programs
> started at location zero and grew up). Some moron decided not only
> to start at zero, but put a 0 at location 0 which led to lots of
> latent bugs. Later when we got the Gould SELs in we had page zero
> unmapped to catch null pointer access, but some programs we didn’t
> have source to (notably Oracle) still expected to access *0 with
> impunity, so we added a flag we could poke into the header called
> “braindead vax compatibility mode” that mapped a zero at zero for
> these programs.
>
> Things got even stranger when we ported to the HEP which had four
> kinds of memory
>
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] " ron minnich via TUHS
2026-01-19 20:06 ` [TUHS] " brantley coile via TUHS
2026-01-19 22:46 ` Ron Natalie via TUHS
@ 2026-01-20 0:05 ` Warner Losh via TUHS
2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 23:07 ` Clem Cole via TUHS
2026-01-20 2:22 ` John Levine via TUHS
3 siblings, 2 replies; 20+ messages in thread
From: Warner Losh via TUHS @ 2026-01-20 0:05 UTC (permalink / raw)
To: ron minnich; +Cc: The Eunuchs Hysterical Society
On Mon, Jan 19, 2026 at 12:45 PM ron minnich via TUHS <tuhs@tuhs.org> wrote:
> I was trying to remember how virtual address space evolved in Unix.
>
> When I joined the game in v6 days, processes had virtual 0->0xffff, with
> the potential for split I/D IIRC, getting you that bit of extra space. The
> kernel also had virtual 0->0xffff, also with split I/D. The kernel
> accessed user data with mov/load from previous address space instructions.
> But also on the 11, trap vectors chewed up a chunk of low physical memory,
> and the diode ROM took out some chunk of high physical. The bootstrap you
> set from the front panel started with many 1's.
>
> Later, when we had bigger address spaces on 32- bit systems, page 0 was
> made unavailable in users address space to catch NULL pointers. Did that
> happen on the VAX at some point, or later?
>
4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
started. I am sure that System V or 32V on the Vax dereferencing 0 gave
0. There were many USENET threads about this in the mid 80s. I have first
hand experience with 4.2BSD, but not the others.
> Later we got the "kernel is high virtual with bit 31 set" -- when did that
> first happen?
> At some point, the conventions solidify, in the sense that kernels have
> high order bits all 1s, and user has page 0x200000 and up; and user mode
> does not get any virtual address with high order bit set. My rough memory
> is that this convention became most common in the 90s.
>
I think that's a 32-bit thing as well. Part of it was driven by
architecture-specific concerns. BSD has linked its kernels at 0xc0000000
since I think 4BSD (I didn't check). The space between 0x80000000 and
0xbfffffff sometimes was kernel, sometimes user, depending on the
architecture and kernel implementation. FreeBSD does 4G/4G for 32-bit
kernels for PowerPC and i386 (though it used to have a 3G/1G split for
i386). 32-bit arm, though, is still just 3G/1G for user/kernel split.
IIRC, there were some that had a 2G/2G split (which is the classic 'high
bit is kernel'), but they escape me at the moment.
64-bit architectures make it super easy to have lots of upper bits
reserved for this or that, but that is beyond the scope of this post.
> So, for the pre-mmu Unix, what addresses did processes get, as opposed to
> the kernel? Did the kernel get loaded to, and run from, high physical? Did
> processes get memory "above trap vectors" and below the kernel?
>
16-bit Venix loaded the programs at CS:0/DS:0, but CS and DS specified
segments that were well above the hardware interrupt vectors at the
bottom and the ROM reset vectors at the top. 16-bit Venix expected
programs not to be naughty and set their own DS/ES, but many did, to ill
effect (since the first trap would set them back on return).
> For the era of MMUs, I know the data general 32-bit systems put user memory
> in "high virtual", which turned out to be really painful, as so many Unix
> programs used address 0 (unknown to their authors!). What other Unix
> systems used a layout where the kernel was in low virtual?
>
I've not seen any, but that doesn't mean they don't exist.
Warner
> I'm not even sure where to find these answers, so apologies in advance for
> the questions.
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-19 23:10 ` Luther Johnson via TUHS
@ 2026-01-20 0:48 ` Alan Coopersmith via TUHS
0 siblings, 0 replies; 20+ messages in thread
From: Alan Coopersmith via TUHS @ 2026-01-20 0:48 UTC (permalink / raw)
To: tuhs
On 1/19/26 15:10, Luther Johnson via TUHS wrote:
> I once found an old version of RCS that expected *0 = 0, and I think many other
> programs did as well. I'm not sure which versions of Unix did or still do this
> any more, but I believe many had the first page in virtual address space for
> user program processes, mapped to a read-only page of zeroes, to support this.
> If I'm remembering that wrong, I'm interested as the other posters here are, in
> the details of which did which sorts of things.
Deep in the SunOS SCCS archives, the initial checkin of the string(3) man page,
labeled as:
D 1.1 83/08/30 15:57:14 wnj 1 0 00103/00000/00000
date and time created 83/08/30 15:57:14 by wnj
contains this warning in the "BUGS" section:
On the Sun processor (and on some other machines), you can NOT use a
zero pointer to indicate a null string. A zero pointer is an error and
results in an abort of the program. If you wish to indicate a null
string, you must have a pointer that points to an explicit null string.
On PDP-11's and VAX'en, a source pointer of zero (0) can generally be
used to indicate a null string. Programmers using NULL to represent an
empty string should be aware of this portability issue.
This warning was apparently not sufficient, since in 1994, Bill Shannon added
the "0@0.so.1" object to Solaris, which you could use with LD_PRELOAD to mmap
a page from /dev/zero to the page at address 0, as described in the ld.so.1(1)
man page:
The user compatibility library /usr/lib/0@0.so.1 provides a mechanism
that establishes a value of 0 at location 0. Some applications exist
that erroneously assume a null character pointer should be treated the
same as a pointer to a null string. A segmentation violation occurs in
these applications when a null character pointer is accessed. If this
library is added to such an application at runtime using LD_PRELOAD,
the library provides an environment that is sympathetic to this errant
behavior. However, the user compatibility library is intended neither
to enable the generation of such applications, nor to endorse this
particular programming practice.
In many cases, the presence of /usr/lib/0@0.so.1 is benign, and it can
be preloaded into programs that do not require it. However, there are
exceptions. Some applications, such as the JVM (Java Virtual Machine),
require that a segmentation violation be generated from a null pointer
access. Applications such as the JVM should not preload
/usr/lib/0@0.so.
There are more lines of comments than code in the source for it:
https://github.com/illumos/illumos-gate/blob/master/usr/src/cmd/sgs/0%400/common/0%400.c
--
-Alan Coopersmith- alan.coopersmith@oracle.com
Oracle Solaris Engineering - https://blogs.oracle.com/solaris
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-19 19:44 [TUHS] " ron minnich via TUHS
` (2 preceding siblings ...)
2026-01-20 0:05 ` Warner Losh via TUHS
@ 2026-01-20 2:22 ` John Levine via TUHS
2026-01-20 15:45 ` Paul Winalski via TUHS
3 siblings, 1 reply; 20+ messages in thread
From: John Levine via TUHS @ 2026-01-20 2:22 UTC (permalink / raw)
To: tuhs
It appears that ron minnich via TUHS <rminnich@gmail.com> said:
>I was trying to remember how virtual address space evolved in Unix.
>
>When I joined the game in v6 days, processes had virtual 0->0xffff, with
>the potential for split I/D IIRC, getting you that bit of extra space. The
>kernel also had virtual 0->0xffff, also with split I/D. The kernel
> accessed user data with mov/load from previous address space instructions.
>But also on the 11, trap vectors chewed up a chunk of low physical memory,
>and the diode ROM took out some chunk of high physical. The bootstrap you
>set from the front panel started with many 1's.
>
>Later, when we had bigger address spaces on 32- bit systems, page 0 was
>made unavailable in users address space to catch NULL pointers. Did that
>happen on the VAX at some point, or later?
Someone said in 4.2BSD which seems right. I fixed my share of code that
assumed that *0 == 0.
>Later we got the "kernel is high virtual with bit 31 set" -- when did that
>first happen?
That's how VAX addressing worked. User programs were in the low half
of the address space, the kernel in the high half.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 2:22 ` John Levine via TUHS
@ 2026-01-20 15:45 ` Paul Winalski via TUHS
2026-01-20 18:24 ` John R Levine via TUHS
2026-01-20 20:05 ` Paul Winalski via TUHS
0 siblings, 2 replies; 20+ messages in thread
From: Paul Winalski via TUHS @ 2026-01-20 15:45 UTC (permalink / raw)
To: John Levine; +Cc: tuhs
On Mon, Jan 19, 2026 at 9:22 PM John Levine via TUHS <tuhs@tuhs.org> wrote:
>
> >Later we got the "kernel is high virtual with bit 31 set" -- when did that
> >first happen?
>
> That's how VAX addressing worked. User programs were in the low half
> of the address space, the kernel in the high half.
>
It has nothing to do with how the VAX implements virtual addressing per
se. It's more to do with the way that the various a.out formats for
program images (OMAGIC, NMAGIC, ZMAGIC) all assume that program addressing
starts at 0. It offers the greatest convenience and flexibility if the
parts of kernel address space that must be mapped into user process address
space are at the high end of the address space, well out of the way of user
code and data.
VAX/VMS had the convention that all addresses with bit 31 on were reserved
to the operating system. And the first page of process address space
(bytes 0-511) was both read- and write-protected so that any attempt to use a
null pointer or small integer as an address generated an exception. IMO
one of the best design decisions ever done in an ABI. It caught countless
program errors.
Unix seems to have followed VAX/VMS's lead on address space layout on the
VAX.
Where did Unix put the process stack on VAX? In VAX/VMS it was at the top
of the user VA space and grew downward. Contrariwise the top of heap
memory (the break in Unix-speak) started just after the statically
allocated part of the program image (including any mapped shared images)
and grew upwards.
-Paul W.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 0:05 ` Warner Losh via TUHS
@ 2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 18:18 ` Ron Natalie via TUHS
2026-01-20 19:38 ` Warner Losh via TUHS
2026-01-20 23:07 ` Clem Cole via TUHS
1 sibling, 2 replies; 20+ messages in thread
From: Rich Salz via TUHS @ 2026-01-20 16:49 UTC (permalink / raw)
To: Warner Losh; +Cc: The Eunuchs Hysterical Society
On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org> wrote:
> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
> 0.
Are you sure? My memory is the opposite. For example,
https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 16:49 ` Rich Salz via TUHS
@ 2026-01-20 18:18 ` Ron Natalie via TUHS
2026-01-20 19:38 ` Warner Losh via TUHS
1 sibling, 0 replies; 20+ messages in thread
From: Ron Natalie via TUHS @ 2026-01-20 18:18 UTC (permalink / raw)
To: Rich Salz, Warner Losh; +Cc: The Eunuchs Hysterical Society
Indeed, 4BSD had a zero located at location zero.
Amusingly the PDP-11 had code down there. You knew you were
dereferencing zero when “p&P6” showed up in your output. This is what
was left printable in the first few instructions of crt0.o (a setd
instruction followed by others).
------ Original Message ------
From "Rich Salz via TUHS" <tuhs@tuhs.org>
To "Warner Losh" <imp@bsdimp.com>
Cc "The Eunuchs Hysterical Society" <tuhs@tuhs.org>
Date 1/20/2026 11:49:25 AM
Subject [TUHS] Re: history of virtual address space
>On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org> wrote:
>
>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>> 0.
>
>
>Are you sure? My memory is the opposite. For example,
>https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 15:45 ` Paul Winalski via TUHS
@ 2026-01-20 18:24 ` John R Levine via TUHS
2026-01-20 18:59 ` Paul Winalski via TUHS
2026-01-20 20:05 ` Paul Winalski via TUHS
1 sibling, 1 reply; 20+ messages in thread
From: John R Levine via TUHS @ 2026-01-20 18:24 UTC (permalink / raw)
To: Paul Winalski; +Cc: tuhs
> On Mon, Jan 19, 2026 at 9:22 PM John Levine via TUHS <tuhs@tuhs.org> wrote:
>
>>
>>> Later we got the "kernel is high virtual with bit 31 set" -- when did that
>>> first happen?
>>
>> That's how VAX addressing worked. User programs were in the low half
>> of the address space, the kernel in the high half.
>>
>
> It has nothing to do with how the VAX implements virtual addressing per
> se. ...
The 1981 VAX Architecture Handbook that I have in my hand disagrees with
you. The Memory Management chapter says that the top half of the address
space is the System (S) while the bottom half is Process. Process is
further divided into P0 which grows up from 0 and P1 which grows down from
the middle. There is one S page table which is in real memory. The P0
and P1 page tables are in S address space, giving the effect of two-level
page tables. Each page table is logically contiguous, with a base and
length register for each.
The a.out program layout is the only one that makes sense on a Vax, code
starts at zero, stack grows down from 2GB.
R's,
John
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 18:24 ` John R Levine via TUHS
@ 2026-01-20 18:59 ` Paul Winalski via TUHS
2026-01-20 19:03 ` John R Levine via TUHS
0 siblings, 1 reply; 20+ messages in thread
From: Paul Winalski via TUHS @ 2026-01-20 18:59 UTC (permalink / raw)
To: John R Levine; +Cc: tuhs
On Tue, Jan 20, 2026 at 1:24 PM John R Levine <johnl@taugh.com> wrote:
> > On Mon, Jan 19, 2026 at 9:22 PM John Levine via TUHS <tuhs@tuhs.org>
> wrote:
> >
> >>
> >>> Later we got the "kernel is high virtual with bit 31 set" -- when did
> that
> >>> first happen?
> >>
> >> That's how VAX addressing worked. User programs were in the low half
> >> of the address space, the kernel in the high half.
> >>
> >
> > It has nothing to do with how the VAX implements virtual addressing per
> > se. ...
>
> The 1981 VAX Architecture Handbook that I have in my hand disagrees with
> you. The Memory Management chapter says that the top half of the address
> space is the System (S) while the bottom half is Process. Process is
> further divided into P0 which grows up from 0 and P1 which grows down from
> the middle. There is one S page table which is in real memory. The P0
> and P1 page tables are in S address space, giving the effect of two-level
> page tables. Each page table is logically contiguous, with a base and
> length register for each.
>
> The a.out program layout is the only one that makes sense on a Vax, code
> starts at zero, stack grows down from 2GB.
>
I am very familiar with the 1981 VAX Architecture Handbook. It is a
somewhat misleading document in that it is describing both how the VAX
hardware works (this is what I would call the "VAX Architecture") but also
the Application Binary Interface (ABI) of the VAX/VMS operating system.
Remember that at the time DEC was adopting a "one architecture, one
operating system" approach on VAX. They were trying to intimately link VAX
(the hardware architecture) and VMS (the operating system) in the minds of
both DEC's own engineers and their customers.
The Memory Management chapter is describing the memory layout of a VAX/VMS
process. If you look at the hardware architecture side of things, you will
see that there is absolutely no requirement or constraint at the hardware
level to have all of the virtual addresses with the high bit on be reserved
to the OS and those with the high bit off to user space. Each page table
entry has its own protection bits that describe the read and write
permissions for each execution mode (user/supervisor/executive/kernel).
Bottom line is, back in the early days of VAX, when DEC referred to "VAX
Architecture" they meant both the hardware architecture of VAX and the OS
architecture of VMS. Contrast this with "PDP-11 Architecture", which meant
only the hardware design, not the veritable zoo of operating systems that
ran on the PDP-11, each of which had its own peculiarities regarding memory
addressing.
One could have, if one wanted, done the opposite and put user address space
in the address range with the high bit on and the kernel address space in
the low part of the address space. It would be fully conformant with the
VAX page table layout and supported by all then-extant (11/780/750/730) and
future VAXen. Unix chose to follow the VMS design (as documented in the
VAX Architecture Manual) pretty much because there was no reason not to.
-Paul W.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 18:59 ` Paul Winalski via TUHS
@ 2026-01-20 19:03 ` John R Levine via TUHS
2026-01-20 19:58 ` Phil Budne via TUHS
0 siblings, 1 reply; 20+ messages in thread
From: John R Levine via TUHS @ 2026-01-20 19:03 UTC (permalink / raw)
To: Paul Winalski; +Cc: tuhs
On Tue, 20 Jan 2026, Paul Winalski wrote:
> The Memory Management chapter is describing the memory layout of a VAX/VMS
> process. If you look at the hardware architecture side of things, you will
> see that there is absolutely no requirement or constraint at the hardware
> level to have all of the virtual addresses with the high bit on be reserved
> to the OS and those with the high bit off to user space. Each page table
> entry has its own protection bits that describe the read and write
> permissions for each execution mode (user/supervisor/executive/kernel).
Did you miss the bit about the S page tables being in real memory and the
P0 and P1 in S memory, with P1 growing down from 2GB? Really, the
hardware dictates the layout. You might want to find a copy of the
handbook and refresh your memory.
> One could have, if one wanted, done the opposite and put user address space
> in the address range with the high bit on and the kernel address space in
> the low part of the address space. It would be fully conformant with the
> VAX page table layout and supported by all then-extant (11/780/750/730) and
> future VAXen.
Nope. Please read the handbook.
Regards,
John Levine, johnl@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 18:18 ` Ron Natalie via TUHS
@ 2026-01-20 19:38 ` Warner Losh via TUHS
2026-01-20 21:51 ` Warner Losh via TUHS
1 sibling, 1 reply; 20+ messages in thread
From: Warner Losh via TUHS @ 2026-01-20 19:38 UTC (permalink / raw)
To: Rich Salz; +Cc: The Eunuchs Hysterical Society
On Tue, Jan 20, 2026, 9:49 AM Rich Salz <rich.salz@gmail.com> wrote:
>
>
> On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org>
> wrote:
>
>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>> 0.
>
>
> Are you sure? My memory is the opposite. For example,
> https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
>
Maybe it was a local hack. I know for sure that the vax assembler I wrote
definitely core dumped when I did *NULL... it may have been 4.3 by then... I
also recall a prof or TA saying that 4.1BSD burned a lot of time on
infinite loops that students wrote, but those now became core dumps... But I've
not looked at the 4.3 sources to see if that's unmapped.
Warner
>
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 19:03 ` John R Levine via TUHS
@ 2026-01-20 19:58 ` Phil Budne via TUHS
2026-01-20 20:59 ` brantley coile via TUHS
0 siblings, 1 reply; 20+ messages in thread
From: Phil Budne via TUHS @ 2026-01-20 19:58 UTC (permalink / raw)
To: tuhs
The "feature" of having user and system use non-overlapping virtual
address ranges makes reading user data (eg system call arguments) easy (*)
The VAX reserving a full half of the address space for the system
software has always struck me as typical of the mindset of
kernel/monitor developers thru the ages: Cycles expended outside of
the system state (ie; by users) are wasted(**)!!
(*) PDP-10's with paging systems all implemented a special "previous
context" flavor of the XCT (execute) instruction for moving data
between system and user spaces (systems with more physical memory than
the full/original 256K word 18-bit virtual address space were common
from the start).
(**) I first noticed it at DEC in the 80's; TOPS-20 monitor developers
hating on EMACS users wasting time by context switching on each
keystroke.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 15:45 ` Paul Winalski via TUHS
2026-01-20 18:24 ` John R Levine via TUHS
@ 2026-01-20 20:05 ` Paul Winalski via TUHS
1 sibling, 0 replies; 20+ messages in thread
From: Paul Winalski via TUHS @ 2026-01-20 20:05 UTC (permalink / raw)
To: tuhs
On Tue, Jan 20, 2026 at 10:45 AM Paul Winalski <paul.winalski@gmail.com>
wrote:
>
> It has nothing to do with how the VAX implements virtual addressing per
> se. It's more to do with the way that the various a.out formats for
> program images (OMAGIC, NMAGIC, ZMAGIC) all assume that program addressing
> starts at 0. It offers the greatest convenience and flexibility if the
> parts of kernel address space that must be mapped into user process address
> space are at the high end of the address space, well out of the way of user
> code and data.
>
I found and downloaded a copy of the 1981 VAX Architecture Handbook and it
turns out I was wrong about this. The page table design for VAX defines
three ranges of virtual addresses:
System (S): bit 31 of virtual address on
Program 0 (P0): bits 31 and 30 of virtual address off
Program 1 (P1): bit 31 of virtual address off, bit 30 on
Each page table has a pair of hardware registers defining the base and
length of the page table (SBR/SLR, P0BR/P0LR, P1BR/P1LR). The SBR address
is a physical address. The P0BR and P1BR addresses are virtual addresses
in system (bit 31 on) space.
This design facilitates a virtual address layout where there is a single
system address range occupying high virtual memory and per-process P0 and
P1 regions. P1 is intended to implement a stack that grows downwards from
virtual address 0x7fffffff. P0 is intended to implement code and heap
space growing upwards from 0x00000000. VAX/VMS set the protection on the
first page of memory to allow neither read nor write access to any of the
four protection modes, so that any attempt to use small integers in the
range 0-511 (decimal) caused a seg fault. This caught countless
programming errors.
So the virtual address layout of early Unix on the VAX was indeed dictated
by the page table mechanism of the VAX hardware. Thanks to John Levine for
correcting me on this.
-Paul W.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 19:58 ` Phil Budne via TUHS
@ 2026-01-20 20:59 ` brantley coile via TUHS
0 siblings, 0 replies; 20+ messages in thread
From: brantley coile via TUHS @ 2026-01-20 20:59 UTC (permalink / raw)
To: Phil Budne; +Cc: tuhs
I don't know if system designers thought that every cycle executed outside of system state was wasted or not, but the idea of four billion bytes of RAM being installed in a system seemed like a faraway fantasy in 1978. The most one could ever cram into a 780 was 128 MB. I was working on a machine with 256 KB and it seemed quite large. The big machines (IBM 370/158 and CDC Cyber 70/74) only had 4 MB and 128 KB. (The 128K were 60-bit words, I grant you.)
Again, no opinion on designers' attitudes towards pesky users.
Brantley
> On Jan 20, 2026, at 2:58 PM, Phil Budne via TUHS <tuhs@tuhs.org> wrote:
>
> The "feature" of having user and system use non-overlapping virtual
> address ranges makes reading user data (eg system call arguments) easy (*)
>
> The VAX reserving a full half of the address space for the system
> software has always struck me of typical of the mindset of
> kernel/monitor developers thru the ages: Cycles expended outside of
> the system state (ie; by users) are wasted(**)!!
>
> (*) PDP-10's with paging systems all implemented a special "previous
> context" flavor of the XCT (execute) instruction for moving data
> between system and user spaces (systems with more physical memory than
> the full/original 256K word 18-bit virtual address space were common
> from the start).
>
> (**) I first noticed it at DEC in the 80's; TOPS-20 monitor developers
> hating on EMACS users wasting time by context switching on each
> keystroke.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 19:38 ` Warner Losh via TUHS
@ 2026-01-20 21:51 ` Warner Losh via TUHS
2026-01-20 22:37 ` Ron Natalie via TUHS
0 siblings, 1 reply; 20+ messages in thread
From: Warner Losh via TUHS @ 2026-01-20 21:51 UTC (permalink / raw)
To: Rich Salz; +Cc: The Eunuchs Hysterical Society
Looking at 4.2BSD sources, I don't see any offset for address 0. a.out
makes it kinda hard to unmap page 0. The compiler would have to put a start
address into the a.out header, and the 4 binaries I checked had an a_entry
of 0. But it looks like only the standalone programs use a_entry... But
that's hidden like this inside setregs() in kern_exec.c:
u.u_ar0[PC] = u.u_exdata.ux_entloc+2;
and that type puns to a.out's a_entry, so that's going to jump to 0.
4.3BSD spells it a little differently:
setregs(exdata.ex_exec.a_entry);
but has the same a_entry == 0. setregs moved to vax/machdep.c where it adds
2.
So maybe my memory is bad, maybe we had a local hack. I can't find out for
sure, but stock 4.3BSD/vax necessarily maps page 0.
Warner
On Tue, Jan 20, 2026 at 12:38 PM Warner Losh <imp@bsdimp.com> wrote:
>
>
> On Tue, Jan 20, 2026, 9:49 AM Rich Salz <rich.salz@gmail.com> wrote:
>
>>
>>
>> On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org>
>> wrote:
>>
>>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>>> 0.
>>
>>
>> Are you sure? My memory is the opposite. For example,
>> https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
>>
>
>
> Maybe it was a local hack. I know for sure that the vax assembler i wrote
> definitely core dump when i did *NULL... it may have been 4.3 by then... I
> also recall a prof or TA saying that 4.1BSD burned a lot of time on
> infinite loops that students wrote, but now they are core dumps... But I've
> not looked at the 4.3 sources to see if that's unmapped.
>
> Warner
>
>>
>>
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 21:51 ` Warner Losh via TUHS
@ 2026-01-20 22:37 ` Ron Natalie via TUHS
0 siblings, 0 replies; 20+ messages in thread
From: Ron Natalie via TUHS @ 2026-01-20 22:37 UTC (permalink / raw)
To: Warner Losh, Rich Salz; +Cc: The Eunuchs Hysterical Society
There was a start address in the a.out header. This dates back to the
V6 days (or perhaps earlier).
The header was on the PDP-11:
Magic number
Text size
Data Size
BSS size
Symbol Table Size
ENTRY LOCATION
and the last two 16 bit values were unused.
When 4BSD came to the vax, the new header was pretty much the same only
with longs instead of shorts.
------ Original Message ------
From "Warner Losh via TUHS" <tuhs@tuhs.org>
To "Rich Salz" <rich.salz@gmail.com>
Cc "The Eunuchs Hysterical Society" <tuhs@tuhs.org>
Date 1/20/2026 4:51:00 PM
Subject [TUHS] Re: history of virtual address space
>Looking at 4.2BSD sources, I don't see any offset for address 0. a.out
>makes it kinda hard to unmap page 0. The compiler would have to put a start
>address into the a.out header, and the 4 binaries I checked had an a_entry
>of 0.. But it looks like only the stand alone programs use a_entry... But
>that's hidden as this inside setregs() in kern_exec.c:
> u.u_ar0[PC] = u.u_exdata.ux_entloc+2;
>and that type puns to a.out's a_entry, so that's going to jump to 0.
>
>4.3BSD spells it a little differently:
> setregs(exdata.ex_exec.a_entry);
>but has the same a_entry == 0. setregs moved to vax/machdep.c where it adds
>2.
>
>So maybe my memory is bad, maybe we had a local hack. I can't find out for
>sure, but stock 4.3BSD/vax necessarily maps page 0.
>
>Warner
>
>On Tue, Jan 20, 2026 at 12:38 PM Warner Losh <imp@bsdimp.com> wrote:
>
>>
>>
>> On Tue, Jan 20, 2026, 9:49 AM Rich Salz <rich.salz@gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org>
>>> wrote:
>>>
>>>> 4.2BSD on the Vax dereferencing 0 would seg fault. I'm not sure when this
>>>> started. I am sure that System V or 32V on the Vax dereferencing 0 gave
>>>> 0.
>>>
>>>
>>> Are you sure? My memory is the opposite. For example,
>>> https://www.tuhs.org/Usenet/comp.bugs.4bsd/1989-July/003016.html
>>>
>>
>>
>> Maybe it was a local hack. I know for sure that the vax assembler i wrote
>> definitely core dump when i did *NULL... it may have been 4.3 by then... I
>> also recall a prof or TA saying that 4.1BSD burned a lot of time on
>> infinite loops that students wrote, but now they are core dumps... But I've
>> not looked at the 4.3 sources to see if that's unmapped.
>>
>> Warner
>>
>>>
>>>
^ permalink raw reply [flat|nested] 20+ messages in thread
* [TUHS] Re: history of virtual address space
2026-01-20 0:05 ` Warner Losh via TUHS
2026-01-20 16:49 ` Rich Salz via TUHS
@ 2026-01-20 23:07 ` Clem Cole via TUHS
1 sibling, 0 replies; 20+ messages in thread
From: Clem Cole via TUHS @ 2026-01-20 23:07 UTC (permalink / raw)
To: Warner Losh; +Cc: The Eunuchs Hysterical Society
below
On Mon, Jan 19, 2026 at 7:05 PM Warner Losh via TUHS <tuhs@tuhs.org> wrote:
>
> 4.2BSD on the Vax dereferencing 0 would seg fault.
Yes, as the default; an optional setup allowed the previous (4.1)
behaviour.
> I'm not sure when this started.
4.2BSD was the first time it was the default.
> I am sure that System V or 32V on the Vax dereferencing 0 gave 0.
Yes, the original Vax kernel from BSD inherited that behaviour from 32/V -
more in a minute.
> There were many USENET threads about this in the mid 80s.
>
Exactly. IMO, the worst offender was the V7 Bourne shell implementation;
the way it managed its heap depended on it. By that time, ports of Unix
were available on other ISPs besides the PDP-11. And many people started
to trip over the problem.
At Berkeley, there was much debate, and everyone agreed it was a sin. In
fact it's #2 in Henry's famous list of the ten commandments:
Thou shalt not follow the NULL pointer
<http://www.eskimo.com/~scs/C-faq/s5.html>, for chaos and madness await
thee at its end.
The feeling was if we protected page zero, then you could get a page fault,
and we could start fixing utilities. It was interesting when you found one
(I remember stumbling on one in the Rand MH mail system). But the problem
was that, by this time frame, a large body of C code had begun to emerge
that people found useful, but had this hidden flaw.
So, with one of the early pre-4.2 releases (4.1 A/B/C), it could be
configured as an option. I ran the CAD Vaxen with it turned on as soon as
we had a system that could support it. By the time of the full release,
4.2BSD had protected page 0.
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2026-01-20 23:08 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-19 22:33 [TUHS] Re: history of virtual address space Noel Chiappa via TUHS
-- strict thread matches above, loose matches on Subject: below --
2026-01-19 19:44 [TUHS] " ron minnich via TUHS
2026-01-19 20:06 ` [TUHS] " brantley coile via TUHS
2026-01-19 22:46 ` Ron Natalie via TUHS
2026-01-19 23:10 ` Luther Johnson via TUHS
2026-01-20 0:48 ` Alan Coopersmith via TUHS
2026-01-20 0:05 ` Warner Losh via TUHS
2026-01-20 16:49 ` Rich Salz via TUHS
2026-01-20 18:18 ` Ron Natalie via TUHS
2026-01-20 19:38 ` Warner Losh via TUHS
2026-01-20 21:51 ` Warner Losh via TUHS
2026-01-20 22:37 ` Ron Natalie via TUHS
2026-01-20 23:07 ` Clem Cole via TUHS
2026-01-20 2:22 ` John Levine via TUHS
2026-01-20 15:45 ` Paul Winalski via TUHS
2026-01-20 18:24 ` John R Levine via TUHS
2026-01-20 18:59 ` Paul Winalski via TUHS
2026-01-20 19:03 ` John R Levine via TUHS
2026-01-20 19:58 ` Phil Budne via TUHS
2026-01-20 20:59 ` brantley coile via TUHS
2026-01-20 20:05 ` Paul Winalski via TUHS