The Unix Heritage Society mailing list
* [TUHS] /dev/drum
       [not found] <mailman.1.1525744802.16322.tuhs@minnie.tuhs.org>
@ 2018-05-08 22:39 ` Johnny Billquist
  0 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-05-08 22:39 UTC (permalink / raw)


On 08.05.18 04:00, jnc at mercury.lcs.mit.edu  (Noel Chiappa) wrote:
>      > From: Johnny Billquist
> 
>      > My point being that ... pages are invisible to the process segments are
>      > very visible. And here we talk from a hardware point of view.
> 
> So you're saying 'segmentation means instructions explicitly include segment
> numbers, and the address space is a two-dimensional array', or 'segmentation
> means pointers explicitly include segment numbers', or something like that?

Not really. I'm trying to understand your argument.

You said:
"BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user 
(except
for setting the total size of the process), whereas segmentation is 
explicitly
visible to the user."

And then you used MERT as an example of this.

My point then is, how is MERT any different from mmap() under Unix? 
Would you then say that the paging is visible under Unix, meaning that 
this is then segmentation?

In my view, you are talking about a software concept. As such, it 
has no bearing on whether a machine has pages or segments; that is a 
hardware distinction, while anything done as a service by the 
OS is a completely different, and independent, question.

> I'm more interested in the semantics that are provided, not bits in
> instructions.

Well, if we talk semantics instead of the hardware, then you can just 
say that any machine is segmented, and you can say that any machine 
has pages. Because I can certainly make it appear both ways from the 
software point of view for applications running under an OS.
And I can definitely do that on a PDP-11. The OS can force pages to 
always be 8K in size, and the OS can (as done by lots of OSes) provide a 
mechanism that gives you something you call segments.

> It's true that with a large address space, one can sort of simulate
> segmentation. To me, machines which explicitly have segment numbers in
> instructions/pointers are one end of a spectrum of 'segmented machines', but
> that's not a strict requirement. I'm more concerned about how they are used,
> what the system/user gets.

So, again. Where does mmap() put you then?
And, just to point out the obvious, any machine with pages has a page 
table, and the page table entry is selected based on the high bits of 
the virtual address. Exactly the same as on the PDP-11. The only 
difference is the number of pages, and the fact that a page on the 
PDP-11 does not have a fixed length, but can be terminated earlier if wanted.
So, pages are explicitly numbered in pointers on any machine with pages.
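
(To make that concrete - a rough sketch in C, with invented names, of a 
KT11-D-style translation. This is only an illustration, not code from 
any actual system; the point is that the top three bits of the virtual 
address select the register, just as the high bits select a page table 
entry on any paged machine.)

/* Rough sketch (invented names) of KT11-D-style translation. */
#include <stdint.h>
#include <stdbool.h>

struct apr {
    uint16_t par;          /* base address, in 64-byte "clicks"       */
    uint16_t plf;          /* page length field, in 64-byte blocks    */
    bool     expand_down;  /* stack pages grow toward lower addresses */
};

/* Returns an 18-bit physical address, or -1 on a length violation. */
long translate(const struct apr apr[8], uint16_t va)
{
    unsigned page   = va >> 13;     /* top 3 bits pick the register  */
    unsigned offset = va & 0x1FFF;  /* 13-bit offset within the page */
    unsigned block  = offset >> 6;  /* 64-byte block number          */

    /* The PDP-11 twist: a page need not be the full 8 KB.           */
    bool ok = apr[page].expand_down ? (block >= apr[page].plf)
                                    : (block <= apr[page].plf);
    if (!ok)
        return -1;                  /* would trap in real hardware   */

    return ((long)apr[page].par << 6) + offset;
}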

> Similarly for paging - fixed sizes (or a small number of sizes) are part of
> the definition, but I'm more interested in how it's used - for demand loading,
> and to simplify main memory allocation purposes, etc.

I don't get it. So, in which way are you still saying that a PDP-11 
doesn't have pages?

>      >> the semantics available for - and _visible_ to - the user are
>      >> constrained by the mechanisms of the underlying hardware.
> 
>      > That is not the same thing as being visible.
> 
> It doesn't meet the definition above ('segment numbers in
> instructions/pointers'), no. But I don't accept that definition.

I'm trying to find out what your definition is. :-)
And if it is consistent and makes sense... :-)

>      > All of this is so similar to mmap() that we could in fact be having this
>      > exact discussion based on mmap() instead .. I don't see you claiming
>      > that every machine use a segmented model
> 
> mmap() (and similar file->address space mapping mechanisms, which a bunch of
> OS's have supported - TENEX/TOP-20, ITS, etc) are interesting, but to me,
> orthogonal - although it clearly needs support from memory management hardware.

Can you explain how mmap() is any different from the service provided by 
MERT?

> And one can add 'sharing memory between two processes' here, too; very similar
> _mechanisms_  to mmap(), but different goals. (Although I suppose two processes
> could map the same area of a file, and that would give them IPC mapping.)

That's how a single copy of shared libraries happens under Unix.
Exactly what happens if you modify the memory depends on what flags you 
give to mmap().
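
(For anyone who hasn't used it, a minimal sketch of what I mean by the 
flags - the file name and length are made up, and this is only an 
illustration, not code from any particular system:)

/* The same file mapped twice: once privately (copy-on-write) and once
 * shared (writes visible to every other process mapping it).
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.dat", O_RDWR);   /* hypothetical file >= 64 KB */
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 65536;

    /* MAP_PRIVATE: modifications are copy-on-write and private to this
     * process - the same way one physical copy of a shared library can
     * back many processes.                                              */
    void *priv = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE, fd, 0);

    /* MAP_SHARED: modifications go back to the file and are seen by all
     * other processes mapping it - i.e. IPC through the mapping.        */
    void *shared = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    if (priv == MAP_FAILED || shared == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("private at %p, shared at %p\n", priv, shared);

    munmap(priv, len);
    munmap(shared, len);
    close(fd);
    return 0;
}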

	Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol



* [TUHS] /dev/drum
@ 2018-05-07 15:36 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-05-07 15:36 UTC (permalink / raw)


    > From: Johnny Billquist

    > My point being that ... pages are invisible to the process segments are
    > very visible. And here we talk from a hardware point of view.

So you're saying 'segmentation means instructions explicitly include segment
numbers, and the address space is a two-dimensional array', or 'segmentation
means pointers explicitly include segment numbers', or something like that?

I seem to recall machines where that wasn't so, but I don't have the time to
look for them. Maybe the IBM System/38? The two 'spaces' in the KA10/KI10,
although a degenerate case (fewer even than the PDP-11), are one example.

I'm more interested in the semantics that are provided, not bits in
instructions.

It's true that with a large address space, one can sort of simulate
segmentation. To me, machines which explicitly have segment numbers in
instructions/pointers are one end of a spectrum of 'segmented machines', but
that's not a strict requirement. I'm more concerned about how they are used,
what the system/user gets.

Similarly for paging - fixed sizes (or a small number of sizes) are part of
the definition, but I'm more interested in how it's used - for demand loading,
and to simplify main memory allocation purposes, etc.


    >> the semantics available for - and _visible_ to - the user are
    >> constrained by the mechanisms of the underlying hardware.

    > That is not the same thing as being visible.

It doesn't meet the definition above ('segment numbers in
instructions/pointers'), no. But I don't accept that definition.


    > All of this is so similar to mmap() that we could in fact be having this
    > exact discussion based on mmap() instead .. I don't see you claiming
    > that every machine use a segmented model

mmap() (and similar file->address space mapping mechanisms, which a bunch of
OS's have supported - TENEX/TOP-20, ITS, etc) are interesting, but to me,
orthogonal - although it clearly needs support from memory management hardware.

And one can add 'sharing memory between two processes' here, too; very similar
_mechanisms_ to mmap(), but different goals. (Although I suppose two processes
could map the same area of a file, and that would give them IPC mapping.)

	   Noel



* [TUHS] /dev/drum
  2018-05-06 13:07 Noel Chiappa
@ 2018-05-06 15:57 ` Johnny Billquist
  0 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-05-06 15:57 UTC (permalink / raw)


On 2018-05-06 15:07, Noel Chiappa wrote:
>      > From: Johnny Billquist
> 
>      >> "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
>      >> words long [which can grow in increments of 32 words]"
> 
>      > But then it is not actually giving programs direct access and
>      > manipulation of the hardware. It is a software construct and service
>      > offered by the OS, and the OS might fiddle around with various hardware
>      > to give this service.
> 
> I don't understand how this is significant: most time-sharing OS's don't give
> the users access to the memory management control hardware?

Right. I would probably say that no time-sharing OS with any kind of 
protection between processes will allow direct access to the hardware.

My point being that (I think we agreed) pages are invisible to the 
process, while segments are very visible. And here we talk from a 
hardware point of view.

So why would it not be significant? It's what the whole definition was 
about.

>      > So the hardware is totally invisible after all.
> 
> Not quite - the semantics available for - and _visible_ to - the user are
> constrained by the mechanisms of the underlying hardware.

That is not the same thing as being visible. The OS gives you some 
mechanisms or services. The OS might base that design on restrictions 
imposed by the underlying hardware, but the underlying hardware is in 
fact totally invisible. A user process uses services provided by the OS, 
and does not try to manipulate, and is not even aware of, the underlying hardware.

> Consider a machine with a KT11-B - it could not provide support for very small
> segments, or be able to adjust the segment size with such small quanta. On the
> other side, the KT11-B could support starting a 'software segment' at any
> 512-byte boundary in the virtual address space, unlike the KT11-C which only
> supports 8KB boundaries.

Yes. But an OS with protection would not allow a user program to access 
the hardware anyway.

Anything like the MERT segments can in fact be provided with both types 
of memory management. And the program will not be aware of which 
hardware is involved, or how to set this up. The OS deals with all of that.

There might be some constraints on where the segment can appear in your 
virtual address space, but that does not change the fact that the 
service can be provided by the OS with either hardware.

Or, to take the RSX PLAS directives as an example. When you create a region 
(which is the equivalent of a segment), it can have alignment on either 
a 64B resolution, or a 512B resolution. And the OS will try to 
accommodate. And where it appears in your virtual memory space, and what 
offsets to use and so on can be requested, but the OS will then give you 
something as close as it can to this, based on hardware restrictions.

All of this is so similar to mmap() that we could in fact be having this 
exact discussion based on mmap() instead, if that were to help.
I don't see you claiming that every machine uses a segmented model, but 
running any modern Unix on any hardware will give you these exact same 
tools. Read up on the mmap() system call, if you don't already know it.
Now, would you say that this system call makes the hardware directly 
visible and manipulated by the user program?
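
(A sketch of the kind of call I mean - the values are invented and 
nothing here is system-specific; note that the program deals only in a 
region of its virtual address space, never in page registers:)

/* Ask for a 32 KB "segment" near a preferred address and let the OS
 * place and align it as the hardware requires.  Illustrative only;
 * MAP_ANONYMOUS is the common BSD/Linux spelling.
 */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len  = 32 * 1024;
    void  *hint = (void *)0x20000000;   /* preferred placement only     */

    /* Without MAP_FIXED the hint is advisory: the kernel rounds and
     * relocates as needed, much as RSX aligns a PLAS region to 64- or
     * 512-byte boundaries.                                              */
    void *region = mmap(hint, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    printf("asked for %p, got %p\n", hint, region);
    return 0;
}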

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol



* [TUHS] /dev/drum
@ 2018-05-06 13:07 Noel Chiappa
  2018-05-06 15:57 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-05-06 13:07 UTC (permalink / raw)


    > From: Johnny Billquist

    >> "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
    >> words long [which can grow in increments of 32 words]"

    > But then it is not actually giving programs direct access and
    > manipulation of the hardware. It is a software construct and service
    > offered by the OS, and the OS might fiddle around with various hardware
    > to give this service.

I don't understand how this is significant: most time-sharing OS's don't give
the users access to the memory management control hardware?

    > So the hardware is totally invisible after all.

Not quite - the semantics available for - and _visible_ to - the user are
constrained by the mechanisms of the underlying hardware.

Consider a machine with a KT11-B - it could not provide support for very small
segments, or be able to adjust the segment size with such small quanta. On the
other side, the KT11-B could support starting a 'software segment' at any
512-byte boundary in the virtual address space, unlike the KT11-C which only
supports 8KB boundaries.

    Noel



* [TUHS] /dev/drum
  2018-05-05 13:06 Noel Chiappa
@ 2018-05-05 20:53 ` Johnny Billquist
  0 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-05-05 20:53 UTC (permalink / raw)


On 2018-05-05 15:06, Noel Chiappa wrote:
>      > From: Johnny Billquist
> 
>      >> in MERT 'segments' (called that) were a basic system primitive, which
>      >> users had access to.
> 
>      > the OS gives you some construct which can easily be mapped on to the
>      > hardware.
> 
> Right. "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
> words long ... Associated with each segment are an internal segment
> identifiern and an optional global name." So it's clear how that maps onto the
> PDP-11 memory management hardware - and a MERT 'segment' might use more than
> one 'chunk'.

Aha! But then it is not actually giving programs direct access and 
manipulation of the hardware. It is a software construct and service 
offered by the OS, and the OS might fiddle around with various hardware 
to give this service.

So the hardware is totally invisible after all. You might understand how 
the OS realizes the service using the PDP-11 hardware, but the program 
itself cannot manipulate the hardware, and is in fact totally oblivious 
to how the hardware works.

>      > It's actually not my definition. Demand paging is a term that have been
>      > used for this for the last 40 years, and is not something there is much
>      > contention about.
> 
> I wasn't talking about "demand paging", but rather your use of the term
> "virtual memory":

Ok. It wasn't totally clear, since you also use a different term for 
demand paging...

>      >>> Virtual memory is just *virtual* memory. It's not "real" or physical
>      >>> in the sense that it has a dedicated location in physical memory
>      >>> ... Instead, each process has its own memory, which might be mapped
>      >>> somewhere in physical memory, but it might also not be.  And one
>      >>> processes address 0 is not the same as another processes address
>      >>> 0. They both have the illusion that they have the full memory address
>      >>> range to them selves, unaware of the fact that there are many
>      >>> processes who also have that same illusion.
> 
> I _like_ having an explicit term for the _concept_ you're describing there; I
> just had a problem with the use of the _term_ "virtual memory" for it - since
> that term already has a different meaning to many people.
> 
> Try Googling "virtual memory" and you turn up things like this: "compensate
> for physical memory shortages by temporarily transferring data from RAM to
> disk". Which is why I proposed calling it "virtual addressing" instead.

Like I said. In the last few years, there has been a growing problem 
with lots of people not understanding virtual memory. I blame it on the 
monoculture we now have, where all that people know, have been exposed 
to, and have read any code for is x86 and Linux. And thus, they are at a 
point where they have created some concepts in their heads, and anything 
that does not match all their preconceived ideas becomes weird.

The Wikipedia article on virtual memory has, on and off, had such 
problems as well. It's not perfect now, but it has improved a lot from 
how it was for a while. But it has also been more correct at times as 
well. The discussion page is really enlightening.

But anyway, I certainly hope things have not in general degenerated 
enough that virtual memory actually has taken on a different meaning. 
There is also the constant issue that pops up with these "revisionist" 
interpretations: there is plenty of old documentation, and there are 
plenty of references, containing definitions and usage of the term 
virtual memory that do not fit their very restrictive interpretation.

> Anyway, this conversation has been very helpful in clarifying my thinking
> about virtual memory/paging. I have updated the CHWiki article based on it:
> 
>    http://gunkies.org/wiki/Virtual_memory
> 
> including the breakdown into three separate (but related) concepts: i) virtual
> addressing, ii) demand loading, and iii) paging. I'd be interested in any
> comments people have.

I might take a peek when I have some time.

>      > Which also begs the question - was there also a RK11-A?
> 
> One assumes there must have been RK11-A's and -B's, otherwise they wouldn't
> have gotten to RK11-C... :-)

Right. :-)

>      > And the "chunks" on a PDP-11, running Unix, RSX or RSTS/E, or something
>      > similar is also totally invisible.
> 
> Right, but not under MERT - although there clearly a single 'software' segment
> might use more than one set of physical 'chunks'.

Well. The PDP-11 chunks certainly, from your description above, would 
appear to be invisible to programs running under MERT. MERT provides a 
segment mechanism, which programs can use, but programs cannot play with 
the hardware. Thus, the hardware is invisible to the program. The MERT 
segment is visible, but such a segment might then involve one or more 
hardware "chunks". The program doesn't know about them, and does not 
deal with them individually or directly. In fact, the program would not 
even need to be aware of what the hardware has, or does. It's really 
invisible to the program. The MERT segment thingy is not a direct access 
or map to the hardware.

Or at least it is not, if I understood you correctly.

Those MERT segments sound identical to the PLAS functionality provided 
by DEC OSes.

> Actually, Unix is _somewhat_ similar, in that processes always have separate
> stack and text/data 'areas' (they don't call them 'segments', as far as I
> could see) - and separate text and data 'areas' too, when pure code is in
> use; and any area might use more than one 'chunk'.
> 
> The difference is that Unix doesn't support 'segments' as an OS primitive, the
> way MERT does.

Right. But these are also not direct translations of the hardware 
functionality. The fact that these segments can be more than 8K should 
be enough of an argument showing that this is not the same as the 
hardware, and that the program does not know, care, or need to understand 
how the PDP-11 "chunks" actually appear or work.
(And I'll probably revert to saying pages after this reply, it's too 
much work to try and use a separate name for them.)

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol



* [TUHS] /dev/drum
@ 2018-05-05 13:06 Noel Chiappa
  2018-05-05 20:53 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-05-05 13:06 UTC (permalink / raw)


    > From: Johnny Billquist

    >> in MERT 'segments' (called that) were a basic system primitive, which
    >> users had access to.

    > the OS gives you some construct which can easily be mapped on to the
    > hardware.

Right. "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
words long ... Associated with each segment are an internal segment
identifier and an optional global name." So it's clear how that maps onto the
PDP-11 memory management hardware - and a MERT 'segment' might use more than
one 'chunk'.


    >> I understand your definitions, and like breaking things up into
    >> 'virtual addressing' (which I prefer as the term, see below),
    >> 'non-residence' or 'demand loaded', and 'paging' (breaking into
    >> smallish, equal-sized chunks), but the problem with using "virtual
    >> memory" as a term for the first is that to most people, that term
    >> already has a meaning - the combination of all three.

Actually, after some research, it turns out to be only the first two. But I
digress...

    > It's actually not my definition. Demand paging is a term that have been
    > used for this for the last 40 years, and is not something there is much
    > contention about.

I wasn't talking about "demand paging", but rather your use of the term
"virtual memory":

    >>> Virtual memory is just *virtual* memory. It's not "real" or physical
    >>> in the sense that it has a dedicated location in physical memory
    >>> ... Instead, each process has its own memory, which might be mapped
    >>> somewhere in physical memory, but it might also not be.  And one
    >>> processes address 0 is not the same as another processes address
    >>> 0. They both have the illusion that they have the full memory address
    >>> range to them selves, unaware of the fact that there are many
    >>> processes who also have that same illusion.

I _like_ having an explicit term for the _concept_ you're describing there; I
just had a problem with the use of the _term_ "virtual memory" for it - since
that term already has a different meaning to many people.

Try Googling "virtual memory" and you turn up things like this: "compensate
for physical memory shortages by temporarily transferring data from RAM to
disk". Which is why I proposed calling it "virtual addressing" instead.

    > I must admit that I'm rather surprised if the term really is unknown to
    > you.

No, of course I am familiar with "demand paging".


Anyway, this conversation has been very helpful in clarifying my thinking
about virtual memory/paging. I have updated the CHWiki article based on it:

  http://gunkies.org/wiki/Virtual_memory

including the breakdown into three separate (but related) concepts: i) virtual
addressing, ii) demand loading, and iii) paging. I'd be interested in any
comments people have.


    > Which also begs the question - was there also a RK11-A?

One assumes there must have been RK11-A's and -B's, otherwise they wouldn't
have gotten to RK11-C... :-)

I have no idea if both existed in physical form (one might have been just a
design exercise). However, the photo of the non-RK11-C indicator panel
confirms that at least one of them was actually implemented.


    > And the "chunks" on a PDP-11, running Unix, RSX or RSTS/E, or something
    > similar is also totally invisible.

Right, but not under MERT - although there clearly a single 'software' segment
might use more than one set of physical 'chunks'.

Actually, Unix is _somewhat_ similar, in that processes always have separate
stack and text/data 'areas' (they don't call them 'segments', as far as I
could see) - and separate text and data 'areas' too, when pure code is in
use; and any area might use more than one 'chunk'.

The difference is that Unix doesn't support 'segments' as an OS primitive, the
way MERT does.

       Noel



* [TUHS] /dev/drum
  2018-05-03 11:39 Noel Chiappa
@ 2018-05-03 21:22 ` Johnny Billquist
  0 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-05-03 21:22 UTC (permalink / raw)


On 2018-05-03 13:39, Noel Chiappa wrote:
>      > That would be a pretty ugly way to look at the world.
> 
> 'Beauty is in the eye of the beholder', and all that! :-)

That is true... Still, would you consider such a view anything close to 
nice, usable or a good reflection of the hardware? :-)

>      > Not to mention that one segment silently slides over into the next one
>      > if it's more than 8K.
> 
> Again, precedent; IIRC, on the GE-645 Multics, segments were limited to 2^N-1 pages,
> precisely because otherwise incrementing an inter-segment pointer could march off
> the end of one, and into the next! (The -645 was implemented as a 'bag on the side'
> of the non-segmented -635, so things like this were somewhat inevitable.)

Right. But such a limitation does not exist on the PDP-11. You in fact 
normally do have several "chunks" concatenated, so that you have your 
linear virtual address space.

>      > wouldn't you say that the "chunks" on a PDP-11 are invisible to the
>      > user? Unless you are the kernel of course. Or run without protection.
> 
> No, in MERT 'segments' (called that) were a basic system primitive, which
> users had access to. (Very cool system! I really need to get moving on trying
> to recover that...)

But you say that MERT ran with memory protection and users not directly 
able to access the MMU? In that case, then, the hardware is not really 
visible, but the OS gives you some construct which can easily be mapped 
on to the hardware.

Or else I am missing something you are saying.

The PLAS directives that were mentioned earlier, which various DEC OSes 
implemented, could be said to reflect segments. In the same way you 
could say that mmap() under Unix gives you a segment. But that is not a 
strict reflection of the hardware.

>      > *Demand* paging is definitely a separate concept from virtual memory.
> 
> Hmmm. I understand your definitions, and like breaking things up into 'virtual
> addressing' (which I prefer as the term, see below), 'non-residence' or
> 'demand loaded', and 'paging' (breaking into smallish, equal-sized chunks),
> but the problem with using "virtual memory" as a term for the first is that to
> most people, that term already has a meaning - the combination of all three.

It's actually not my definition. Demand paging is a term that has been 
used for this for the last 40 years, and is not something there is much 
contention about. I must admit that I'm rather surprised if the term 
really is unknown to you.
Locate any good text on operating system concepts, and you should find 
demand paging properly described. (Or go to Wikipedia, even though there 
is some disdain around it here, it does contain a bunch of information 
and references that are not all useless.)

Virtual memory has usually been equally uncontroversial, but that has 
started to change in the last 10 years or so. I guess it's partly a 
result of the monoculture we now face, where you have lots of people who 
have never seen or used anything but x86 and Linux, if they have done 
anything close to the memory management. Add to that more and more 
people without a proper CS education fooling around in this area, with a 
somewhat incomplete understanding of the concepts.

>      > There is no real connection between virtual memory and memory
>      > protection. One can exist with or without the other.
> 
> Virtual addressing and memory protection; yes, no connection. (Although the
> former will often give you the latter - if process A can't see, or name,
> process B's memory, it can't damage it.)

Right. Separation of memory can be seen as a form of memory protection 
as well. But that is really just a side effect of not even having the 
ability to refer to the memory, and is not explicitly any form of 
protection.
(Hello, virtual memory once more...)

>      > Might have been just some internal early attempt that never got out of DEC?
> 
> Could be; something similar seems to have happened to the 'RK11-B':
> 
>    http://gunkies.org/wiki/RK11_disk_controller

Which also begs the question - was there also a RK11-A?

>      >> I don't have any problem with several different page sizes, _if it makes
>      >> engineering sense to support them_.
> 
>      > So, would you then say that such machines do not have pages, but have
>      > segments?
>      > Or where do you draw the line? Is it some function of how many different
>      > sized pages there can be before you would call it segments? ;-)
> 
> No, the number doesn't make a difference (to me). I'm trying to work out what
> the key difference is; in part, it's that segments are first-class objects
> which are visible to the user; paging is almost always hidden under the
> sheets.
> 
> But not always; some OS's allow processes to share pages, or to map file pages
> into address spaces, etc. Which does make it complex to separate the two..

And the "chunks" on a PDP-11, running Unix, RSX or RSTS/E, or something 
similar is also totally invisible. You can expand memory, or even 
through mechanisms as shared memory and PLAS directives have your memory 
space mapping different "objects", but the MMU page registers details 
are totally invisible to you there.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol



* [TUHS] /dev/drum
@ 2018-05-03 11:39 Noel Chiappa
  2018-05-03 21:22 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-05-03 11:39 UTC (permalink / raw)


    > That would be a pretty ugly way to look at the world.

'Beauty is in the eye of the beholder', and all that! :-)

    > Not to mention that one segment silently slides over into the next one
    > if it's more than 8K.

Again, precedent; IIRC, on the GE-645 Multics, segments were limited to 2^N-1 pages,
precisely because otherwise incrementing an inter-segment pointer could march off
the end of one, and into the next! (The -645 was implemented as a 'bag on the side'
of the non-segmented -635, so things like this were somewhat inevitable.)


    > wouldn't you say that the "chunks" on a PDP-11 are invisible to the
    > user? Unless you are the kernel of course. Or run without protection.

No, in MERT 'segments' (called that) were a basic system primitive, which
users had access to. (Very cool system! I really need to get moving on trying
to recover that...)


    > *Demand* paging is definitely a separate concept from virtual memory.

Hmmm. I understand your definitions, and like breaking things up into 'virtual
addressing' (which I prefer as the term, see below), 'non-residence' or
'demand loaded', and 'paging' (breaking into smallish, equal-sized chunks),
but the problem with using "virtual memory" as a term for the first is that to
most people, that term already has a meaning - the combination of all three.

(I have painful memories of this sort of thing - the term 'locator' was
invented after we gave up trying to convince people one could have a network
architecture in which not all packets contained addresses. That caused a major
'does not compute' fault in most people's brains! And 'locator' has since been
perverted from its original definition. But I digress.)


    > There is no real connection between virtual memory and memory
    > protection. One can exist with or without the other.

Virtual addressing and memory protection; yes, no connection. (Although the
former will often give you the latter - if process A can't see, or name, 
process B's memory, it can't damage it.)


    > Might have been just some internal early attempt that never got out of DEC?

Could be; something similar seems to have happened to the 'RK11-B':

  http://gunkies.org/wiki/RK11_disk_controller


    >> I don't have any problem with several different page sizes, _if it makes
    >> engineering sense to support them_.

    > So, would you then say that such machines do not have pages, but have 
    > segments?
    > Or where do you draw the line? Is it some function of how many different 
    > sized pages there can be before you would call it segments? ;-)

No, the number doesn't make a difference (to me). I'm trying to work out what
the key difference is; in part, it's that segments are first-class objects
which are visible to the user; paging is almost always hidden under the
sheets.

But not always; some OS's allow processes to share pages, or to map file pages
into address spaces, etc. Which does make it complex to separate the two..

     Noel



* [TUHS] /dev/drum
  2018-04-30 15:05 Noel Chiappa
  2018-04-30 16:43 ` Paul Winalski
  2018-04-30 21:41 ` Johnny Billquist
@ 2018-05-03  2:54 ` Charles Anthony
  2 siblings, 0 replies; 105+ messages in thread
From: Charles Anthony @ 2018-05-03  2:54 UTC (permalink / raw)


On Mon, Apr 30, 2018 at 8:05 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Johnny Billquist
>
>     > But how do you then view modern architectures which have different
>     > sized pages? Are they no longer pages then?
>
> Actually, there is precedent for that. The original Multics hardware, the
> GE-645, supported two page sizes. That was dropped in later machines (the
> Honeywell 6000's) since it was decided that the extra complexity wasn't
> worth it.
>
7 page sizes. AL39, page 40:

 PTWAM.ADDR
The 18 high-order bits of the 24-bit absolute main memory address of
the page. The hardware ignores low-order bits of this page address
according to page size based on the following:

Page size in words    ADDR bits ignored
    64             none
   128              17
   256            16-17
   512            15-17
  1024            14-17
  2048            13-17
  4096            12-17


I am unsure of exactly which model supported this, but somewhere in the
evolution from 645 to DPS8-M.

-- Charles



* [TUHS] /dev/drum
  2018-04-30 15:05 Noel Chiappa
  2018-04-30 16:43 ` Paul Winalski
@ 2018-04-30 21:41 ` Johnny Billquist
  2018-05-03  2:54 ` Charles Anthony
  2 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-04-30 21:41 UTC (permalink / raw)


On 2018-04-30 17:05, Noel Chiappa wrote:
>      > From: Johnny Billquist
> 
>      > This is where I disagree. The problem is that the chunks in the PDP-11
>      > do not describe things from a zero offset, while a segment does. Only
>      > chunk 0 is describing addresses from a 0 offset. And exactly which chunk
>      > is selected is based on the virtual address, and nothing else.
> 
> Well, you have something of a point, but... it depends on how you look at it.
> If you think of a PDP-11 address as holding two concatenated fields (3 bits of
> 'segment' and 13 bits of 'offset within segment'), not so much.

Err... That would be a pretty ugly way to look at the world. Not to 
mention that one segment silently slides over into the next one if it's 
more than 8K. No, I think I prefer to not try to look at the world that 
way. :-)

> IIRC there are other segmented machines that do things this way - I can't
> recall the details of any off the top of my head. (Well, there the KA10/KI10,
> with their writeable/write-protected 'chunks', but that's a bit of a
> degenerate case. I'm sure there is some segmented machine that works that way,
> but I can't recall it.)

The KA/KI without the Twenex pager pretty much only had two segments, 
unless I remember wrong. It was the low segment and the high segment. 
Well, ok, that also implied that it was based on the virtual address, 
but these two segments were definitely treated as two separate spaces, 
that were never concatenated.

> BTW, this reminds me of another key differentiator between paging and
> segments, which is that paging was originally _invisible_ to the user (except
> for setting the total size of the process), whereas segmentation is explicitly
> visible to the user.

Good point.
Well, wouldn't you say that the "chunks" on a PDP-11 are invisible to 
the user? Unless you are the kernel of course. Or run without protection.

> I think there is at least one PDP-11 OS which makes the 'chunks' visible to
> the user - MERT (and speaking of which, I need to get back on my project of
> trying to track down source/documentation for it).

Don't know that system. But you could just argue that RT-11 also makes 
it all visible as well. Any OS which doesn't give you proper protection 
will allow you to play with all hardware, no matter how it appears.

>      > Demand paging really is a separate thing from virtual memory. It's a
>      > very bad thing to try and conflate the two.
> 
> Really? I always worked on the basis that the two terms were synonyms - but
> I'm very open to the possibility that there is a use to having them have
> distinct meanings.

*Demand* paging is definitely a separate concept from virtual memory.

OSes like RSX, RSTS/E, or Unix, on PDP-11s do give you virtual memory. 
However, none of them implement demand paging. The PDP-11 can, as you 
noted, also support demand paging, but there is little need for such an 
advanced mechanism on that machine, for several reasons.

When I first read about demand paging, it was in the context of VMS, and 
the simplicity, elegance, and efficiency of it really impressed me.

Starting execution of an image in VMS really just meant that the OS read 
in the executable image meta information, which gave things like what 
memory addresses were defined. VMS created the page table, and marked 
every page as invalid, and then it started executing in the process. The 
first thing that would happen would of course be that you got a page 
fault by trying to fetch the instruction at the first address of the 
program. VMS immediately suspends execution until it has read in that 
page from the binary, marks that page as now having a valid translation, 
and then resumes execution. Of course, most likely you'll have a whole 
storm of page faults almost immediately after a process starts 
executing, but after a while, you'll be in a sort of balance where you 
mostly run without any more page faults.
(I would assume Unix does it the same way on modern machines.)
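
(Sketched roughly in C - all the names are invented, and this is not 
VMS or Unix source, just an illustration of the sequence described above:)

/* Hypothetical sketch of the demand-paging sequence described above. */
struct pte { unsigned valid : 1; unsigned long frame; };

struct process {
    struct pte *page_table;
    int         image_fd;   /* executable image backing the pages */
};

/* Invented helpers standing in for what a real kernel would provide. */
extern unsigned long allocate_frame(void);
extern void read_page_from_image(int fd, unsigned long vpn,
                                 unsigned long frame);
extern void restart_instruction(struct process *p);

/* Called from the trap handler when a reference hits an invalid page. */
void page_fault(struct process *p, unsigned long vpn)
{
    unsigned long frame = allocate_frame();         /* may steal a page */
    read_page_from_image(p->image_fd, vpn, frame);  /* blocking read    */

    p->page_table[vpn].frame = frame;
    p->page_table[vpn].valid = 1;                   /* translation OK   */

    /* Resume the faulting instruction; after the initial storm of
     * faults the working set is resident and faults become rare.       */
    restart_instruction(p);
}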

Very different from RSX, for example, where the OS first makes sure 
all required virtual memory has valid translations before the process 
is allowed to execute.

> I see a definition of 'virtual memory' below, but what would you use for
> 'paging'?

"Paging" is a bit of an unclear term for me here. It might be referring 
to the reading in of a page at a page fault, which would be a demand 
paging situation, the paging out of individual pages as a demand paged 
system requires pages to fulfill needs of a different process, or it 
might mean the reading in or writing out of all pages of a process, in 
which case it would be close to a synonym with "swapping". So, in which 
way do you mean "paging"?

> Now that I think about it, there are actually _three_ concepts: 'virtual
> memory', as you define it; what I will call 'non-residence' - i.e. a process
> can run without _all_ of its virtual memory being present in physical memory;
> and 'paging' - which I would define as 'use fixed-size blocks'. (The third is
> more of an engineering thing, rather than high-level architecture, since it
> means you never have to 'shuffle' core, as systems that used variable-sized
> things seem to.)
> 
> 'Non-residence' is actually orthogonal to 'paging'; I can imagine a paging
> system which didn't support non-residence, and vice versa (either swapping
> the entire virtual address space, or doing it a segment at a time if the
> system has segments).

Yeah. And non-residence, as you call it, is what demand paging is.
Pages are brought into physical memory on demand.

The definition you use for paging here is an interesting point. Not sure 
how much of a thing it would be, but it might be a thing sometimes as 
well. But then we get into the question I asked further down: how do you 
then deal with having different-sized pages on a system?
But "paging" just meaning "fixed-size pages" is certainly a distinction 
you can make. I think I wouldn't find much use for it, but that doesn't 
mean others might not, or that it wouldn't be a possible classification.

>      > each process would have to be aware of all the other processes that use
>      > memory, and make sure that no two processes try to use the same memory,
>      > or chaos ensues.
> 
> There's also the System 360 approach, where processes share a single address
> space (physical memory - no virtual memory on them!), but it uses protection
> keys on memory 'chunks' (not sure of the correct IBM term) to ensure that one
> process can't tromp on another's memory.

Oh. There is no real connection between virtual memory and memory 
protection. One can exist with or without the other.

>      >> a memory management device for the PDP-11 which provided 'real' paging,
>      >> the KT11-B?
> 
>      > have never read any technical details. Interesting read.
> 
> Yes, we were lucky to be able to retrieve detailed info on it! A PDP-11/20
> sold on eBay with a fairly complete set of KT11-B documentation, and allegedly
> a "KT11-B" as well, but alas, it turned out to 'only' be an RK11-C. Not that
> RK11-C's aren't cool, but on the 'cool scale' they are like 3, whereas a
> KT11-B would have been, like, 17! :-) Still, we managed to get the KT11-B
> 'manual' (such as it is) and prints online.

Really cool. Thanks for sharing.

> I'd love to find out equivalent detail for the KT11-A, but I've never seen
> anything on it. (And I've appealed before for the KS11, which an early PDP-11
> Unix apparently used, but no joy.)

Might have been just some internal early attempt that never got out of DEC?

>      > But how do you then view modern architectures which have different sized
>      > pages? Are they no longer pages then?
> 
> Actually, there is precedent for that. The original Multics hardware, the
> GE-645, supported two page sizes. That was dropped in later machines (the
> Honeywell 6000's) since it was decided that the extra complexity wasn't worth
> it.
> 
> I don't have any problem with several different page sizes, _if it makes
> engineering sense to support them_. (I assume that the rationale for their
> re-introduction is that in the age of 64-bit machines, page tables for very
> large 'chunks' can be very large if pages of ~1K or so are used, or something
> like.)
> 
> It does make real memory allocation (one of the advantages of paging) more
> difficult, since there would now be small and large page frames. Although I
> suppose it wouldn't be hard to coalesce them, if there are only two sizes, and
> one's a small power-of-2 multiple of the other - like 'fragments' in the
> Berkeley Fast File System for BSD4.2.

So, would you then say that such machines do not have pages, but have 
segments?
Or where do you draw the line? Is it some function of how many different 
sized pages there can be before you would call it segments? ;-)

But yes, it can make life more complex. But somewhere, this is all just 
math, and computers are usually pretty good with that.

> I have a query, though - how does a system with two page sizes know which to
> use? On Multics (and probably on the x86), it's a per-segment attribute. But
> on a system with a large, flat address space, how does the system know which
> parts of it are using small pages, and which large?

Beats me. I always just assumed it would be related to the virtual 
address. Some part of the address space uses small pages, some other part 
uses large. But I have not ever cared to actually find out.
Just as some machines had I/O space mapped to several addresses, so 
that you could hit things cached or uncached, and possibly read/write 
different sized data efficiently without lots of fiddling.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol



* [TUHS] /dev/drum
  2018-04-30 15:05 Noel Chiappa
@ 2018-04-30 16:43 ` Paul Winalski
  2018-04-30 21:41 ` Johnny Billquist
  2018-05-03  2:54 ` Charles Anthony
  2 siblings, 0 replies; 105+ messages in thread
From: Paul Winalski @ 2018-04-30 16:43 UTC (permalink / raw)


On 4/30/18, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>
> There's also the System 360 approach, where processes share a single
> address space (physical memory - no virtual memory on them!), but it
> uses protection keys on memory 'chunks' (not sure of the correct IBM
> term) to ensure that one process can't tromp on another's memory.

IBM always used the term "storage" rather than "memory".  The Storage
Protection feature for System/360 was optional on some models, and
provided a 4-bit protection key for each 2048-byte block (IBM's term)
of physical storage, allowing for up to 15 processes to be executing
simultaneously (key value 0 disabled storage protection).
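
(A sketch of that rule in C, with invented structure names - the 
documented behavior being that a store is permitted when the program's 
PSW key is zero or matches the 4-bit key of the addressed 2048-byte 
block; illustrative only, not IBM code:)

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 2048

struct s360 {
    uint8_t  psw_key;        /* 4-bit key of the running program   */
    uint8_t *storage_keys;   /* one 4-bit key per 2048-byte block  */
};

bool store_allowed(const struct s360 *m, uint32_t phys_addr)
{
    uint8_t block_key = m->storage_keys[phys_addr / BLOCK_SIZE] & 0x0F;
    return m->psw_key == 0 || m->psw_key == block_key;
}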

The System/370 operating systems DOS/VS, OS/VS1, and OS/VS2 SVS all
had a single, demand-paged virtual address space, usually a few times
larger than the physical memory.  For DOS/VS the virtual address space
was partitioned into a space (mapped virtual=physical address)
starting at address 0 for the OS, then up to five user program
partitions.  Programs were linked to run in one of these partitions.
OS/VS1 (successor to OS/360 MFT) also had a single virtual address
space, but it had more partitions and the program loader could
dynamically relocate applications to run in any of the partitions.
OS/VS2 SVS (first successor to OS/360 MVT) had a single virtual
address space, but dynamically allocated partitions as needed.  OS/VS2
MVS had processes in the modern OS sense--each executing program got
its own virtual address space.

-Paul W.



* [TUHS] /dev/drum
@ 2018-04-30 15:05 Noel Chiappa
  2018-04-30 16:43 ` Paul Winalski
                   ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-30 15:05 UTC (permalink / raw)


    > From: Johnny Billquist

    > This is where I disagree. The problem is that the chunks in the PDP-11
    > do not describe things from a zero offset, while a segment does. Only
    > chunk 0 is describing addresses from a 0 offset. And exactly which chunk
    > is selected is based on the virtual address, and nothing else.

Well, you have something of a point, but... it depends on how you look at it.
If you think of a PDP-11 address as holding two concatenated fields (3 bits of
'segment' and 13 bits of 'offset within segment'), not so much.

IIRC there are other segmented machines that do things this way - I can't
recall the details of any off the top of my head. (Well, there the KA10/KI10,
with their writeable/write-protected 'chunks', but that's a bit of a
degenerate case. I'm sure there is some segmented machine that works that way,
but I can't recall it.)

BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user (except
for setting the total size of the process), whereas segmentation is explicitly
visible to the user.

I think there is at least one PDP-11 OS which makes the 'chunks' visible to
the user - MERT (and speaking of which, I need to get back on my project of
trying to track down source/documentation for it).


    > Demand paging really is a separate thing from virtual memory. It's a
    > very bad thing to try and conflate the two.

Really? I always worked on the basis that the two terms were synonyms - but
I'm very open to the possibility that there is a use to having them have
distinct meanings.

I see a definition of 'virtual memory' below, but what would you use for
'paging'?

Now that I think about it, there are actually _three_ concepts: 'virtual
memory', as you define it; what I will call 'non-residence' - i.e. a process
can run without _all_ of its virtual memory being present in physical memory;
and 'paging' - which I would define as 'use fixed-size blocks'. (The third is
more of an engineering thing, rather than high-level architecture, since it
means you never have to 'shuffle' core, as systems that used variable-sized
things seem to.)

'Non-residence' is actually orthogonal to 'paging'; I can imagine a paging
system which didn't support non-residence, and vice versa (either swapping
the entire virtual address space, or doing it a segment at a time if the
system has segments).

    > There is nothing about virtual memory that says that you do not have to
    > have all of your virtual memory mapped to physical memory when the
    > process is running.

True.

    > Virtual memory is just *virtual* memory. It's not "real" or physical in
    > the sense that it has a dedicated location in physical memory, which
    > would be the same for all processes talking about that memory
    > address. Instead, each process has its own memory, which might be mapped
    > somewhere in physical memory, but it might also not be.

OK so far.

    > each process would have to be aware of all the other processes that use
    > memory, and make sure that no two processes try to use the same memory,
    > or chaos ensues.

There's also the System 360 approach, where processes share a single address
space (physical memory - no virtual memory on them!), but it uses protection
keys on memory 'chunks' (not sure of the correct IBM term) to ensure that one
process can't tromp on another's memory.


    >> a memory management device for the PDP-11 which provided 'real' paging,
    >> the KT11-B?

    > have never read any technical details. Interesting read.

Yes, we were lucky to be able to retrieve detailed info on it! A PDP-11/20
sold on eBay with a fairly complete set of KT11-B documentation, and allegedly
a "KT11-B" as well, but alas, it turned out to 'only' be an RK11-C. Not that
RK11-C's aren't cool, but on the 'cool scale' they are like 3, whereas a
KT11-B would have been, like, 17! :-) Still, we managed to get the KT11-B
'manual' (such as it is) and prints online.

I'd love to find out equivalent detail for the KT11-A, but I've never seen
anything on it. (And I've appealed before for the KS11, which an early PDP-11
Unix apparently used, but no joy.)


    > But how do you then view modern architectures which have different sized
    > pages? Are they no longer pages then?

Actually, there is precedent for that. The original Multics hardware, the
GE-645, supported two page sizes. That was dropped in later machines (the
Honeywell 6000's) since it was decided that the extra complexity wasn't worth
it.

I don't have any problem with several different page sizes, _if it makes
engineering sense to support them_. (I assume that the rationale for their
re-introduction is that in the age of 64-bit machines, page tables for very
large 'chunks' can be very large if pages of ~1K or so are used, or something
like.)

It does make real memory allocation (one of the advantages of paging) more
difficult, since there would now be small and large page frames. Although I
suppose it wouldn't be hard to coalesce them, if there are only two sizes, and
one's a small power-of-2 multiple of the other - like 'fragments' in the
Berkeley Fast File System for BSD4.2.

I have a query, though - how does a system with two page sizes know which to
use? On Multics (and probably on the x86), it's a per-segment attribute. But
on a system with a large, flat address space, how does the system know which
parts of it are using small pages, and which large?

      Noel



* [TUHS] /dev/drum
  2018-04-29 16:34   ` Steve Nickolas
@ 2018-04-29 16:48     ` Warner Losh
  0 siblings, 0 replies; 105+ messages in thread
From: Warner Losh @ 2018-04-29 16:48 UTC (permalink / raw)


On Sun, Apr 29, 2018 at 10:34 AM, Steve Nickolas <usotsuki at buric.co> wrote:

> On Sun, 29 Apr 2018, Johnny Billquist wrote:
>
>> Right. Then I think we're on the same page. I won't comment further on the
>> x86 which I mentioned before, since I think we can all agree that that one
>> is really brain damaged.
>>
>
> I'm not sure whether the 65C816 might actually be *more* brain-damaged in
> its segmentation than the 8088.  That said, I suspect the 8088 is worse
> because of the bitshift.
>

For the 8088, it depends. If you don't care about memory protection,
then the segments make a nice, fairly granular way to slice up the space
between 'processes'. Old-school Unix for the 8088 used 'small model' to get
relocatable binaries anywhere in the 1MB space. For that use case, it's
quite convenient.

Of course, for anything else, or if you want memory protection, or demand
paging, or, well, a bunch of other stuff here, it totally sucks.

Warner



* [TUHS] /dev/drum
  2018-04-29 15:37 ` Johnny Billquist
@ 2018-04-29 16:34   ` Steve Nickolas
  2018-04-29 16:48     ` Warner Losh
  0 siblings, 1 reply; 105+ messages in thread
From: Steve Nickolas @ 2018-04-29 16:34 UTC (permalink / raw)


On Sun, 29 Apr 2018, Johnny Billquist wrote:

> Right. Then I think we're on the same page. I won't comment further on the x86 
> which I mentioned before, since I think we can all agree that that one is 
> really brain damaged.

I'm not sure whether the 65C816 might actually be *more* brain-damaged in 
its segmentation than the 8088.  That said, I suspect the 8088 is worse 
because of the bitshift.
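
(For reference, the bitshift in question - a 20-bit real-mode address is 
the 16-bit segment value shifted left four bits plus the 16-bit offset, 
so segments overlap every 16 bytes. Illustration only:)

/* 8086/8088 real-mode address formation. */
#include <stdint.h>
#include <stdio.h>

static uint32_t real_mode_addr(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;   /* 20-bit result */
}

int main(void)
{
    /* Many segment:offset pairs name the same byte: */
    printf("%05X\n", (unsigned)real_mode_addr(0xB800, 0x0000)); /* B8000 */
    printf("%05X\n", (unsigned)real_mode_addr(0xB000, 0x8000)); /* B8000 */
    return 0;
}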

-uso.



* [TUHS] /dev/drum
  2018-04-28 20:40 Noel Chiappa
@ 2018-04-29 15:37 ` Johnny Billquist
  2018-04-29 16:34   ` Steve Nickolas
  0 siblings, 1 reply; 105+ messages in thread
From: Johnny Billquist @ 2018-04-29 15:37 UTC (permalink / raw)


On 2018-04-28 22:40, Noel Chiappa wrote:
>      > From: Johnny Billquist
> 
>      > Gah. If I were to try and collect every copy made, it would be quite a
>      > collection.
> 
> Well, just the 'processor handbook's (the little paperback things), I have
> about 30. (If you add devices, that probably doubles it.) I think my
> collection is complete.

I have no idea how many different editions DEC printed. There were 
certainly several done for some models, as well as several books that 
covered different combinations of models. All in all, 30 does not 
surprise me. And I would not be surprised if it were even more...

>      > So there was a total change in terminology early in the 11/45 life, it
>      > would appear. I wonder why. ... I probably would not blame some market
>      > droids.
> 
> I was joking, but also serious. I really do think it was most likely
> marketing-driven. (See below for why I don't think it was engineering-driven,
> which leaves....)
> 
> I wonder if there's anything in the DEC archives (a big chunk of which are now
> at the CHM) which would shed any light? Some of the archives are online there,
> e.g.:
> 
>    http://www.bitsavers.org/pdf/dec/pdp11/memos/
> 
> but it seems to be mostly engineering (although there's some that would be
> characterized as marketing).

Yeah. I've read a bunch of those memos in the past. I don't think they 
in general were concerned with terminology, so I'm not surprised if you 
don't find anything relevant there.

>      > one of the most important differences between segmentation and pages are
>      > that with segmentation you only have one contiguous range of memory,
>      > described by a base and a length register. This will be a contiguous
>      > range of memory both in virtual memory, and in physical memory.
> 
> I agree completely (although I extend it to multiple segments, each of which
> has the characterstics you describe).

Right. Then I think we're on the same page. I won't comment further on 
the x86 which I mentioned before, since I think we can all agree that 
that one is really brain damaged.

> Which is why I think the original DEC nomenclature for the PDP-11's memory
> management was more appropriate - the description above is _exactly_ the
> functionality provided for each of the 8 'chunks' (to temporarily use a
> neutral term) of PDP-11 address space, which don't quack like most other
> 'pages' (to use the 'if it quacks like a duck' standard).

This is where I disagree. The problem is that the chunks in the PDP-11 
do not describe things from a zero offset, while a segment does. Only 
chunk 0 is describing addresses from a 0 offset. And exactly which chunk 
is selected is based on the virtual address, and nothing else.

It's a good analogy to view segments in the same way as you might view 
them in object files, or sometimes even in executables. One segment is a 
more or less self contained chunk, described by one segment descriptor, 
or whatever you want to call them.
This is not possible to do on a PDP-11 with the chunks we have.
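
To make that concrete, here is a minimal C sketch of the 11/45-style 
translation: the high three bits of the virtual address select one of 
the 8 PAR/PDR pairs, and only the 13-bit offset below them is checked 
against the programmed length. (This is just an illustration - access 
control bits and downward-expanding pages are left out.)

    #include <stdint.h>
    #include <stdio.h>

    /* One active page register pair: PAR base in 64-byte clicks,
       PDR length in 64-byte blocks (upward expansion only). */
    struct apr { uint32_t par_base; uint16_t pdr_len; };

    static int translate(const struct apr apr[8], uint16_t va, uint32_t *pa)
    {
        unsigned chunk  = va >> 13;        /* which of the 8 'chunks'    */
        unsigned offset = va & 017777;     /* 13-bit offset within it    */
        unsigned block  = offset >> 6;     /* 64-byte block number       */

        if (block > apr[chunk].pdr_len)    /* past the programmed length */
            return -1;                     /* -> length abort            */
        *pa = (apr[chunk].par_base << 6) + offset;
        return 0;
    }

    int main(void)
    {
        struct apr apr[8] = {{ .par_base = 01000, .pdr_len = 0177 }};
        uint32_t pa;
        if (translate(apr, 0432, &pa) == 0)
            printf("va 0432 -> pa %o\n", (unsigned)pa);
        return 0;
    }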

> One query I have comes from the usual goal of 'virtual memory' (which is the
> concept most tightly associated with 'pages'), which is to allow a process to
> run without all of its pages in physical memory.
> 
> I don't know much about PDP-11 DEC OS's, but do any of them do this? (I.e.
> allow partial residency.)  If not, that would be ironic (in view of the later
> name) - and, I think, evidence that the PDP-11 'chunks' aren't really pages.

Demand paging really is a separate thing from virtual memory. It's a 
very bad thing to try and conflate the two.

There is nothing about virtual memory that says you cannot have all of 
your virtual memory mapped to physical memory while the process is 
running. Virtual memory is just *virtual* memory. It's not "real" or 
physical in the sense of having a dedicated location in physical memory 
that would be the same for all processes referring to that memory 
address. Instead, each process has its own memory, which might be mapped 
somewhere in physical memory, but it might also not be. And one 
process's address 0 is not the same as another process's address 0. 
They both have the illusion that they have the full memory address 
range to themselves, unaware of the fact that there are many processes 
that also have that same illusion. And meanwhile, the OS has its own 
idea about the memory; the OS also knows how the physical memory is 
used, and keeps resources, and the allocation of them, under control on 
behalf of the processes, without the processes knowing this.
Without virtual memory, each process would have to deal with physical 
memory, and then each process would have to be aware of all the other 
processes that use memory, and make sure that no two processes try to 
use the same memory, or chaos ensues.

You can do virtual memory with segmentation as well, since that also 
means having a translation between your virtual address and a physical 
address. It's just that the translation is a bit easier and more 
straightforward with segments.
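
For comparison, a single base-and-length segment translation is about 
as simple as it gets (again just a sketch, not any particular machine):

    #include <stdint.h>

    /* One segment: every valid virtual address starts at 0 and the
       whole thing is one contiguous range in physical memory. */
    struct seg { uint32_t base, limit; };

    static int seg_translate(struct seg s, uint32_t va, uint32_t *pa)
    {
        if (va >= s.limit)          /* length violation */
            return -1;
        *pa = s.base + va;
        return 0;
    }

    int main(void)
    {
        struct seg s = { 0x10000, 0x2000 };
        uint32_t pa;
        return seg_translate(s, 0x100, &pa) == 0 && pa == 0x10100 ? 0 : 1;
    }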

> BTW, did you know that prior to the -11/45, there was a memory management
> device for the PDP-11 which provided 'real' paging, the KT11-B? More here:
> 
>    http://gunkies.org/wiki/KT11-B_Paging_Option

I knew about it, but have never read any technical details. Interesting 
read.

> I seem to recall some memos in the memo archive that discussed it; I _think_
> it mentioned why they decided not to go that way in doing memory management
> for the /45, but I forget the details? (Maybe the performance hit of keeping
> the page tables in main memory was significant?)

There might have been several reasons, but performance would most likely 
be one of them.

>      > With segmentation you cannot have your virtual memory split up and
>      > spread out over physical memory.
> 
> Err, Multics did that; the process' address space was split up into many
> segments (a true 2-dimensional naming system, with 18 bits of segment number),
> which were then split up into pages, for both virtual memory ('not all
> resident'), and for physical memory allocation.
> 
> Although I suppose one could view that as two separate, sequential steps -
> i.e. i) the division into segments, and ii) the division of segments into
> pages. In fact, I take this approach in describing the Multics memory system,
> since it's easier to understand as two independent things.

Yes. I definitely view that as two independent things stacked on top of 
each other. And it would appear you do too. :-)

And that allows such a spread, as well as holes, through the paging 
part. The segmentation part does not allow for it.

>      > it would be very wrong to call what the PDP-11 have segmentation
> 
> The problem is that what PDP-11 memory management does isn't really like
> _either_ segmentation, or paging, as practised in other machines. With only 8
> chunks, it's not like Multics etc, which have very large address spaces split
> up into many segments. (And maybe _that_'s why the name was changed - when
> people heard 'segments' they thought 'lots of them'.)
> 
> However, it's not like paging on effectively all other systems with paging,
> because in them paging's used to provide virtual memory (in the sense of 'the
> process runs with pages missing from real memory'), and to make memory
> allocation simple by use of fixed-size page frames.
> 
> So any name given PDP-11 'chunks' is going to have _some_ problems. I just
> think 'segmentation' (as you defined it at the top) is a better fit than the
> alternative...

Agreed that there is some problem in classifying them either way. 
However, the objection against pages is solely based on the fact that 
they are not fixed in size, while with segments the fit becomes much 
stranger.

But how do you then view modern architectures which have different-sized 
pages? Are they no longer pages then?

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-28 20:40 Noel Chiappa
  2018-04-29 15:37 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-28 20:40 UTC (permalink / raw)


    > From: Johnny Billquist

    > Gah. If I were to try and collect every copy made, it would be quite a
    > collection.

Well, just the 'processor handbook's (the little paperback things), I have
about 30. (If you add devices, that probably doubles it.) I think my
collection is complete.


    > So there was a total change in terminology early in the 11/45 life, it
    > would appear. I wonder why. ... I probably would not blame some market
    > droids.

I was joking, but also serious. I really do think it was most likely
marketing-driven. (See below for why I don't think it was engineering-driven,
which leaves....)

I wonder if there's anything in the DEC archives (a big chunk of which are now
at the CHM) which would shed any light? Some of the archives are online there,
e.g.:

  http://www.bitsavers.org/pdf/dec/pdp11/memos/

but it seems to be mostly engineering (although there's some that would be
characterized as marketing).


    > one of the most important differences between segmentation and pages are
    > that with segmentation you only have one contiguous range of memory,
    > described by a base and a length register. This will be a contiguous
    > range of memory both in virtual memory, and in physical memory.

I agree completely (although I extend it to multiple segments, each of which
has the characteristics you describe).

Which is why I think the original DEC nomenclature for the PDP-11's memory
management was more appropriate - the description above is _exactly_ the
functionality provided for each of the 8 'chunks' (to temporarily use a
neutral term) of PDP-11 address space, which don't quack like most other
'pages' (to use the 'if it quacks like a duck' standard).


One query I have comes from the usual goal of 'virtual memory' (which is the
concept most tightly associated with 'pages'), which is to allow a process to
run without all of its pages in physical memory.

I don't know much about PDP-11 DEC OS's, but do any of them do this? (I.e.
allow partial residency.)  If not, that would be ironic (in view of the later
name) - and, I think, evidence that the PDP-11 'chunks' aren't really pages.


BTW, did you know that prior to the -11/45, there was a memory management
device for the PDP-11 which provided 'real' paging, the KT11-B? More here:

  http://gunkies.org/wiki/KT11-B_Paging_Option

I seem to recall some memos in the memo archive that discussed it; I _think_
it mentioned why they decided not to go that way in doing memory management
for the /45, but I forget the details? (Maybe the performance hit of keeping
the page tables in main memory was significant?)


    > With segmentation you cannot have your virtual memory split up and
    > spread out over physical memory.

Err, Multics did that; the process' address space was split up into many
segments (a true 2-dimensional naming system, with 18 bits of segment number),
which were then split up into pages, for both virtual memory ('not all
resident'), and for physical memory allocation.

Although I suppose one could view that as two separate, sequential steps -
i.e. i) the division into segments, and ii) the division of segments into
pages. In fact, I take this approach in describing the Multics memory system,
since it's easier to understand as two independent things.

    > You can also have "holes" in your memory, with pages that are invalid,
    > yet have pages higher up in your memory .. Something that is impossible
    > with segmentation, since you only have one set of registers for each
    > memory type (at most) in a segmented memory implementation.

You seem to be thinking of segmentation a la Intel 8086, which is a hack they
added to allow use of more memory (although I suspect that PDP-11 chunks were
a hack of a similar flavour).

At the time we are speaking of, the Intel 8086 did not exist (it came along
quite a few years later). The systems which supported segmentation, such as
Multics, the Burroughs 5000 and successors, etc. had 'real' segmentation, with
a full two-dimensional naming system for memory. (Burroughs 5000 segment
numbers were 10 bits wide.)

    > I mean, when people talk about segmented memory, what most everyone
    > today thinks of is the x86 model, where all of this certainly is true.

It's also (IMNSHO) irrelevant to this. Intel's brain-damage is not the
entirety of computer science (or shouldn't be).

(BTW, later Intel xx86 machines did allow you to have 'holes' in segments, via
the per-segment page tables.)


    > it would be very wrong to call what the PDP-11 have segmentation

The problem is that what PDP-11 memory management does isn't really like
_either_ segmentation, or paging, as practised in other machines. With only 8
chunks, it's not like Multics etc, which have very large address spaces split
up into many segments. (And maybe _that_'s why the name was changed - when
people heard 'segments' they thought 'lots of them'.)

However, it's not like paging on effectively all other systems with paging,
because in them paging's used to provide virtual memory (in the sense of 'the
process runs with pages missing from real memory'), and to make memory
allocation simple by use of fixed-size page frames.

So any name given PDP-11 'chunks' is going to have _some_ problems. I just
think 'segmentation' (as you defined it at the top) is a better fit than the
alternative...

   Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-28 10:41 ` Johnny Billquist
@ 2018-04-28 12:19   ` Rico Pajarola
  0 siblings, 0 replies; 105+ messages in thread
From: Rico Pajarola @ 2018-04-28 12:19 UTC (permalink / raw)


On Sat, Apr 28, 2018 at 12:41 PM, Johnny Billquist <bqt at update.uu.se> wrote:

> On 2018-04-28 02:19, Noel Chiappa wrote:
>
>> I have a spare copy of the '72 /45 handbook; send me your address, and
>> I'll
>> send it along. (Every PDP-11 fan should have a copy of every edition of
>> every
>> model's handbooks... :-)
>>
>
> Gah. If I were to try and collect every copy made, it would be quite a
> collection. Thanks for the offer. If you really want to get rid of one, and
> is willing to send to Switzerland, then I'll take it in a private mail. But
> you don't really have to.

Noel, If you can send it to California within the next 7 days, I can hand
deliver it to Johnny in Switzerland


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-28  0:19 Noel Chiappa
@ 2018-04-28 10:41 ` Johnny Billquist
  2018-04-28 12:19   ` Rico Pajarola
  0 siblings, 1 reply; 105+ messages in thread
From: Johnny Billquist @ 2018-04-28 10:41 UTC (permalink / raw)


On 2018-04-28 02:19, Noel Chiappa wrote:
>      > From: Johnny Billquist
> 
>      > For 1972 I only found the 11/40 handbook.
> 
> I have a spare copy of the '72 /45 handbook; send me your address, and I'll
> send it along. (Every PDP-11 fan should have a copy of every edition of every
> model's handbooks... :-)

Gah. If I were to try and collect every copy made, it would be quite a 
collection. Thanks for the offer. If you really want to get rid of one, 
and is willing to send to Switzerland, then I'll take it in a private 
mail. But you don't really have to.

> In the meantime, I'm too lazy to scan the whole thing, but here's the first
> page of Chapter 6 from the '72:
> 
>    http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/jpg/tmp/PDP11145ProcHbook72pg6-1.jpg

Cool. Thanks. That is a very different terminology. The MMU is not even 
called the MMU, but the "Memory Segmentation Unit".
Very interesting.

>      > went though the 1972 Maintenance Reference Manual for the 11/45. That
>      > one also says "page". :-)
> 
> There are a few remnant relics of the 'segment' phase, e.g. here:
> 
>    http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s
> 
> which has this comment:
> 
>    / turn on segmentation
> 
> Also, if you look at the end, you'll see SSR0, SSR1 etc (as per the '72
> handbook), instead of the later SR0, SR1, etc.

So there was a total change in terminology early in the 11/45 life, it 
would appear. I wonder why.
I can only speculate, but I probably would not blame some market droids.
Trying to think about this, I feel that one of the most important 
differences between segmentation and pages is that with segmentation 
you only have one contiguous range of memory, described by a base and a 
length register. This will be a contiguous range of memory both in 
virtual memory, and in physical memory. With segmentation you cannot 
have your virtual memory split up and spread out over physical memory. 
You can also, pretty much, arbitrarily point to where in physical memory 
your virtual memory starts and ends.

With pages, this is obviously not the case anymore. The PDP-11 does have 
the ability to have pages start at close to arbitrary addresses in 
physical memory, but you can certainly have your virtual memory spread 
out over different places in physical memory. Your virtual memory is 
also not just described by a base and length, as you have 8 pages, 
starting at fixed virtual memory addresses. You can also have "holes" in 
your memory, with pages that are invalid, yet have pages higher up in 
your virtual memory which are valid - something that is impossible with 
segmentation, since you only have one set of registers for each memory 
type (at most) in a segmented memory implementation.

I mean, when people talk about segmented memory, what most everyone 
today thinks of is the x86 model, where all of this certainly is true.

So, by this definition, it would be very wrong to call what the PDP-11 
have segmentation. And I would suspect that to be the reason why DEC 
changed their terminology. Segmentation simply started to become an 
established term, and it did not match what the PDP-11 did, so the 
documentation had to change to better describe what it was.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-28  0:19 Noel Chiappa
  2018-04-28 10:41 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-28  0:19 UTC (permalink / raw)


    > From: Johnny Billquist

    > For 1972 I only found the 11/40 handbook.

I have a spare copy of the '72 /45 handbook; send me your address, and I'll
send it along. (Every PDP-11 fan should have a copy of every edition of every
model's handbooks... :-)

In the meantime, I'm too lazy to scan the whole thing, but here's the first
page of Chapter 6 from the '72:

  http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/jpg/tmp/PDP11145ProcHbook72pg6-1.jpg


    > went though the 1972 Maintenance Reference Manual for the 11/45. That
    > one also says "page". :-)

There are a few remnant relics of the 'segment' phase, e.g. here:

  http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s

which has this comment:

  / turn on segmentation

Also, if you look at the end, you'll see SSR0, SSR1 etc (as per the '72
handbook), instead of the later SR0, SR1, etc.

	Noel



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-27 23:10 ` Johnny Billquist
@ 2018-04-27 23:39   ` Warner Losh
  0 siblings, 0 replies; 105+ messages in thread
From: Warner Losh @ 2018-04-27 23:39 UTC (permalink / raw)


On Fri, Apr 27, 2018 at 5:10 PM, Johnny Billquist <bqt at update.uu.se> wrote:

> On 2018-04-28 01:01, Noel Chiappa wrote:
>
>>      > From: Johnny Billquist
>>
>>      >> Well, the 1972 edition of the -11/45 processor handbook
>>                     ^^
>>
>>      > It would be nice if you actually could point out where this is the
>>      > case. I just went through that 1973 PDP-11/45 handbook
>>                                         ^^
>>
>> Yes, the '73 one (red/purple cover) had switched. It's only the '72 one
>> (red/white cover) that says 'segments'.
>>
>
>
> Yes, I know you said 1972, and I said 1973.
> For 1972 I only found the 11/40 handbook. But, as I said, that handbook
> also says "page". If you have some link to the version you refer to, it
> would be interesting.
>
> Told you I searched around in different handbooks from that era, and found
> none who said segments.


The OS book I used in college in the late 80's said 'segments' to describe
the PDP-11 MMU. At least that's my memory; I can't even recall the name of
the OS book, and my cache of college textbooks went to Goodwill years
ago. IIRC, it was because they were so large relative to the VAX's and made
demand paging difficult to implement.

Warner


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-27 23:01 Noel Chiappa
@ 2018-04-27 23:10 ` Johnny Billquist
  2018-04-27 23:39   ` Warner Losh
  0 siblings, 1 reply; 105+ messages in thread
From: Johnny Billquist @ 2018-04-27 23:10 UTC (permalink / raw)


On 2018-04-28 01:01, Noel Chiappa wrote:
>      > From: Johnny Billquist
> 
>      >> Well, the 1972 edition of the -11/45 processor handbook
>                     ^^
> 
>      > It would be nice if you actually could point out where this is the
>      > case. I just went through that 1973 PDP-11/45 handbook
>                                         ^^
> 
> Yes, the '73 one (red/purple cover) had switched. It's only the '72 one
> (red/white cover) that says 'segments'.


Yes, I know you said 1972, and I said 1973.
For 1972 I only found the 11/40 handbook. But, as I said, that handbook 
also says "page". If you have some link to the version you refer to, it 
would be interesting.

Told you I searched around in different handbooks from that era, and 
found none who said segments.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-27 23:01 Noel Chiappa
  2018-04-27 23:10 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-27 23:01 UTC (permalink / raw)


    > From: Johnny Billquist

    >> Well, the 1972 edition of the -11/45 processor handbook
                   ^^

    > It would be nice if you actually could point out where this is the
    > case. I just went through that 1973 PDP-11/45 handbook
                                       ^^

Yes, the '73 one (red/purple cover) had switched. It's only the '72 one
(red/white cover) that says 'segments'.

	   Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
       [not found] <mailman.1.1524708001.6296.tuhs@minnie.tuhs.org>
@ 2018-04-27 22:41 ` Johnny Billquist
  0 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-04-27 22:41 UTC (permalink / raw)


On 2018-04-26 04:00, jnc at mercury.lcs.mit.edu  (Noel Chiappa) wrote:
>      > From: Johnny Billquist
> 
>      > if you hadn't had the ability for them to be less than 8K, you wouldn't
>      > even try that argument.
> 
> Well, the 1972 edition of the -11/45 processor handbook called them segments..:-)

I think we had this argument before as well. It would be nice if you 
actually could point out where this is the case. I just went through 
that 1973 PDP-11/45 handbook, and all it says is "page" everywhere I look.
I also checked the 1972 PDP-11/40 handbook, and except for one mention 
of "segment" in the introduction part of the handbook, where it is not 
even clear that it actually refers specifically to the MMU capabilities, 
that handbook also uses the word "page" everywhere.

I also checked the PDP-11/20 handbook, but that one does not even cover 
any MMU, so no mention of either "page" or "segment" can be found.

> I figure some marketing droid found out that 'paging' was the new buzzword, and
> changed the name...:-)  :-)

Somehow I doubt it, but looking forward to your references... :-)

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 22:24 ` Johnny Billquist
@ 2018-04-26  5:51   ` Tom Ivar Helbekkmo
  0 siblings, 0 replies; 105+ messages in thread
From: Tom Ivar Helbekkmo @ 2018-04-26  5:51 UTC (permalink / raw)


Johnny Billquist <bqt at update.uu.se> writes:

> Uh... DEC STD 144 does not have anything to do with remapping bad
> blocks to replacement good blocks. DEC STD 144 describes how a media
> stores known bad blocks on a disk, so that file system initialization
> can then take whatever action needed to that these blocks are not used
> by anything. In RSX (and VMS), they will be included in a file called
> BADBLK.SYS, and thus be perceived as "used".

Thanks for the clarification, Johnny!

As with so much else these days, DEC STD 144 is available on-line:
https://archive.org/details/bitsavers_decstandar_576622
Interesting reading -- and the scheme used in RSX and VMS is exactly
as required by the standard in section 4.1: "Small Operating System
Conformance".

-tih
-- 
Most people who graduate with CS degrees don't understand the significance
of Lisp.  Lisp is the most important idea in computer science.  --Alan Kay


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-26  1:53 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-26  1:53 UTC (permalink / raw)


    > From: Johnny Billquist

    > if you hadn't had the ability for them to be less than 8K, you wouldn't
    > even try that argument.

Well, the 1972 edition of the -11/45 processor handbook called them segments.. :-)

I figure some marketing droid found out that 'paging' was the new buzzword, and
changed the name... :-) :-)

	Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
       [not found] <mailman.143.1524696952.3788.tuhs@minnie.tuhs.org>
@ 2018-04-25 23:08 ` Johnny Billquist
  0 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-04-25 23:08 UTC (permalink / raw)


On 2018-04-26 00:55, jnc at mercury.lcs.mit.edu  (Noel Chiappa) wrote:
>      > From: Johnny Billquist
> 
>      > PDP-11 have 8K pages.
> 
> Segments.:-)  (This is an old argument between Johnny and me, I'm not trying
> to re-open it, just yanking his chain...:-)

:-)
And if you hadn't had the ability for them to be less than 8K, you 
wouldn't even try that argument. But just because the hardware gives you 
some extra capabilities, you suddenly want to associate them with a 
technology that really gives you far fewer capabilities.

Either way, the next page always starts at the next 8K boundary.

>      > On a PDP-11, all your virtual memory was always there when the process
>      > was on the CPU
> 
> In theory, at least (I don't know of an OS that made use of this), didn't the
> memory management hardware allow the possibility to do demand-paging? I note
> that Access Control Field value 0 is "non-resident".

Oh yes. You definitely could do demand paging based on the hardware 
capabilities.

> Unix kinda-sorta used this stuff, to automatically extend the stack when the
> user ran off the end of it (causing a trap).

Ah. Good point. The same is also true for brk, even though that is an 
explicit request to grow your memory space at the other side.

DEC OSes had the brk part as well, but the stack was not automatically 
extended if needed. DEC liked to have the stack at the low end of the 
address space, and had hardware that trapped if the stack grew below 
400 (octal).

>      > you normally did not have demand paging, since that was not really
>      > gaining you much on a PDP-11
> 
> Especially on the later machines, with more than 256KB of hardware main
> memory. Maybe it might have been useful on the earlier ones (e.g. the -11/45).

Yeah, it would actually probably have been more useful on an 11/45.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 20:29               ` Paul Winalski
  2018-04-25 20:45                 ` Larry McVoy
@ 2018-04-25 23:01                 ` Bakul Shah
  1 sibling, 0 replies; 105+ messages in thread
From: Bakul Shah @ 2018-04-25 23:01 UTC (permalink / raw)



This is what we did at Fortune Systems for our 68k-based v7 system.
There was an external “mmu” which added a base value to a 16-bit virtual
address to compute a physical address, and compared it against a limit.
There were four base,limit pairs that you had to rewrite to context switch:
text, data, spare and stack. At a minimum the system shipped with 256KB,
so you could have a number of processes memory resident. You swapped
out a complete segment when you ran out of space.

I imagine other 16bit word size machines of that era used similar schemes.
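
A context switch on a scheme like that is then little more than 
reloading the four pairs. A rough sketch in C - the register layout and 
the write_mmu_reg() accessor are invented for illustration, not 
Fortune's actual hardware:

    #include <stdint.h>

    /* Hypothetical base/limit pairs for text, data, spare and stack. */
    enum { SEG_TEXT, SEG_DATA, SEG_SPARE, SEG_STACK, NSEG };

    struct seg_reg  { uint16_t base, limit; };
    struct proc_mmu { struct seg_reg seg[NSEG]; };

    static void write_mmu_reg(int seg, uint16_t base, uint16_t limit)
    {
        /* here the kernel would poke the external MMU's registers */
        (void)seg; (void)base; (void)limit;
    }

    static void mmu_context_switch(const struct proc_mmu *next)
    {
        for (int i = 0; i < NSEG; i++)
            write_mmu_reg(i, next->seg[i].base, next->seg[i].limit);
    }

    int main(void)
    {
        struct proc_mmu p = { .seg = { { 0, 0x7fff }, { 0x100, 0x4000 },
                                       { 0, 0 },      { 0x200, 0x1000 } } };
        mmu_context_switch(&p);
        return 0;
    }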

> On Apr 25, 2018, at 1:29 PM, Paul Winalski <paul.winalski at gmail.com> wrote:
> 
> Some PDP-11 models had a virtual addressing feature called PLAS
> (Program Logical Address Space).  The PDP-11 had 16-bit addressing,
> allowing for at most 64K per process.  To take advantage of physical
> memory larger than 64K, PLAS allowed multiple 64K virtual address
> spaces to be mapped to the larger physical memory.  Sort of the
> reverse of the usual virtual addressing scheme, where there is more
> virtual memory than physical memory.



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-25 22:55 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-25 22:55 UTC (permalink / raw)


    > From: Johnny Billquist

    > PDP-11 have 8K pages.

Segments. :-) (This is an old argument between Johnny and me, I'm not trying
to re-open it, just yanking his chain... :-)


    > On a PDP-11, all your virtual memory was always there when the process
    > was on the CPU

In theory, at least (I don't know of an OS that made use of this), didn't the
memory management hardware allow the possibility to do demand-paging? I note
that Access Control Field value 0 is "non-resident".

Unix kinda-sorta used this stuff, to automatically extend the stack when the
user ran off the end of it (causing a trap).

    > you normally did not have demand paging, since that was not really
    > gaining you much on a PDP-11

Especially on the later machines, with more than 256KB of hardware main
memory. Maybe it might have been useful on the earlier ones (e.g. the -11/45).

	Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
       [not found] <mailman.139.1524690859.3788.tuhs@minnie.tuhs.org>
@ 2018-04-25 22:54 ` Johnny Billquist
  0 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-04-25 22:54 UTC (permalink / raw)



On 2018-04-25 23:14, Paul Winalski <paul.winalski at gmail.com> wrote:

> On 4/25/18, Ronald Natalie 
> <ron at ronnatalie.com> wrote:
>> The fun argument is what is Virtual Memory.    Typically, people align that
>> with paging but you can stretch the definition to cover paging.
>> This was a point of contention in the early VAX Unix days as the ATT (System
>> III, even V?) didn’t support paging on the VAX where as BSD did.
> In my book, virtual memory is any addressing scheme where the
> addresses that the program uses are different from the physical memory
> addresses.  Nowadays most OSes use a scheme where each process has its
> own virtual address space, and virtual memory is demand-paged from
> backing store on disk.  But there have been other schemes.

Yeah...

> Some PDP-11 models had a virtual addressing feature called PLAS
> (Program Logical Address Space).  The PDP-11 had 16-bit addressing,
> allowing for at most 64K per process.  To take advantage of physical
> memory larger than 64K, PLAS allowed multiple 64K virtual address
> spaces to be mapped to the larger physical memory.  Sort of the
> reverse of the usual virtual addressing scheme, where there is more
> virtual memory than physical memory.

Note that PLAS is not a PDP-11 hardware thing. PLAS was the name for the 
mechanism provided by the OS for applications to be able to access more 
than 64K of memory while still being limited by the virtual address space 
limit of 64K.
PLAS is in one way very similar to mmap, except that it's not backed by 
a file. You create a memory region through the OS (giving it a name 
and a size, which can be more than 64K), and then you can map into it, 
specifying the offset into it and the window size, as well as where to 
map it in your virtual address space.
This is realized by simply using the MMU pages of the PDP-11 to map to 
different parts of the region.
Any OS that had the PLAS capability by necessity had to have an MMU, 
which was the hardware part that allowed this to be implemented.
So, all PDP-11s with an MMU could allow the OS running on it to provide 
the PLAS capabilities.
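
In present-day POSIX terms, the pattern looks roughly like the sketch 
below - a loose analogy only, with shm_open()/mmap() standing in for 
the region-and-mapping directives, and the names being mine rather 
than DEC's:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Create a named region larger than any single window, then map an
       8 KiB window of it at a chosen offset - loosely what a PLAS-style
       interface lets a 64K task do with the MMU pages. */
    int main(void)
    {
        off_t  region = 256 * 1024, offset = 64 * 1024;
        size_t window = 8192;

        int fd = shm_open("/plas_demo", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, region) < 0)
            return 1;

        void *w = mmap(NULL, window, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, offset);
        if (w == MAP_FAILED)
            return 1;

        /* ... use the window ... */
        munmap(w, window);
        close(fd);
        shm_unlink("/plas_demo");
        return 0;
    }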

A PDP-11 in general is "reversed" in that the physical address space is 
much larger than the virtual one. Although, the same was also true on 
the VAX in the last iteration where the NVAX implementation allowed for 
a 34 bit physical address, while the virtual address space was still 
only 32 bits.

But that doesn't make the virtual memory any less virtual.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-25 22:46 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-25 22:46 UTC (permalink / raw)


    > From: Johnny Billquist

    > I don't know exactly why DEC left the last three tracks unused. Might
    > have been for diagnostic tools to have a scratch area to play with.
    > Might have been that those tracks were found to be less reliable. Or
    > maybe something completely different. But it was not for bad block
    > replacement, as DEC didn't even do that on RK05

The "pdp11 peripherals handbook" (1975 edition at least, I can't be bothered
to check them all) says, for the RK11:

  "Tracks/surface: 200+3 spare"

and for the RP11:

  "Tracks/surface: 400 (plus 6 spares)"

which sounds like it could be for bad block replacement, but the RP11-C
Maintenance Manual says (pg. 3-10) "the inner-most cylinders 400-405 are only
used for maintenance".

Unix blithely ignored all that, and used every block available on both the
RK11 and RP11.

     Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
       [not found] <mailman.137.1524667148.3788.tuhs@minnie.tuhs.org>
  2018-04-25 21:43 ` Johnny Billquist
  2018-04-25 22:24 ` Johnny Billquist
@ 2018-04-25 22:37 ` Johnny Billquist
  2 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-04-25 22:37 UTC (permalink / raw)


On 2018-04-25 16:39, arnold at skeeve.com wrote:

> Tim Bradshaw<tfb at tfeb.org>  wrote:
> 
>> Do systems with huge pages page in the move-them-to-disk sense I wonder?
>> I assume they don't in practice because it would be insane but I wonder
>> if the VM system is in theory even willing to try.
> Why not? If there's enough backing store availble?
> 
> Note that many systems demand page-in the code section straight out of the
> executable, so if some of those pages aren't needed, they can just
> be released.  And said pages can be shared among all processes running
> the same executable, for further savings.

Right.

>> Something I never completely understood in the paging vs swapping
>> thing was that I think that systems which could page (well, 4.xBSD in
>> particular) would*also*  swap if pushed.  I think the reason for that was
>> that, if you were really short of memory, swapping freed up the process
>> structure and also the page tables &c for the process, which would still
>> be needed even if all its pages had been evicted.  Is that right?
> It depends upon the system. Some had pageable page tables, which is
> pretty hairy. Others didn't. I don't remember what 4BSD did on the
> Vax, but I suspect that the page tables and enough info to find everything
> on swap stayed in kernel memory.  (Where's Chris Torek when you need
> him?:-)

The page tables describing the user's memory space are themselves 
located in virtual memory on the VAX, so they can be paged out without 
problem. If you refer to an entry in the user page table, and that page 
itself is paged out, you'll get a page fault on the system page table, 
so you'll need to page in that page of the system.
But I seem to remember that 4BSD (as well as NetBSD) keeps all of the 
kernel in physical memory all the time, and doesn't page the kernel 
parts, including process page tables.

> But yes, swapping was generally used to free up large amounts of memory
> if under heavy load.

Paging would free up the same amount of memory, if we talk about the 
memory used by the process itself. However, there is various metadata 
in the kernel itself that is needed for a process, which will remain in 
memory even if no pages are in memory. Swapping will also move 
non-essential kernel structures out to disk for the process, in addition 
to the pages. Thus, there is a difference between swapping and paging.
The whole process context, for example, which includes the page 
tables as well as the kernel-mode stack for the process, processor 
registers, possibly also open file contexts, and probably some other 
things I'm forgetting now.
Very little needs to be kept in memory for a process if you are not 
interested in resuming it on short notice.
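
As a rough illustration (not any particular kernel's actual layout), 
the split might look like this - the names are made up:

    #include <stdint.h>

    /* Per-process baggage that can follow the pages out to swap. */
    struct swappable_ctx {
        uint32_t *page_table;      /* user page table / PAR-PDR copies */
        uint32_t  kstack[512];     /* kernel-mode stack                */
        uint32_t  regs[16];        /* saved processor registers        */
        int       ofile[20];       /* open file context                */
    };

    /* The little that must stay resident: enough to find the rest. */
    struct resident_proc {
        long swap_addr;            /* where the context sits on swap   */
        long swap_size;
        int  state;
    };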

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
       [not found] <mailman.137.1524667148.3788.tuhs@minnie.tuhs.org>
  2018-04-25 21:43 ` Johnny Billquist
@ 2018-04-25 22:24 ` Johnny Billquist
  2018-04-26  5:51   ` Tom Ivar Helbekkmo
  2018-04-25 22:37 ` Johnny Billquist
  2 siblings, 1 reply; 105+ messages in thread
From: Johnny Billquist @ 2018-04-25 22:24 UTC (permalink / raw)



On 2018-04-25 16:39, Tom Ivar Helbekkmo<tih at hamartun.priv.no> wrote:
> 
> Ron Natalie<ron at ronnatalie.com>  writes:
> 
>> RK05’s were 4872 blocks.  Don’t know why that number has stuck with
>> me, too many invocations of mkfs I guess.  Oddly, DEC software for
>> some reason never used the last 72 blocks.
> I guess that's because they implemented DEC Standard 144, known as
> bad144 in BSD Unix.  It's a system of remapping bad sectors to spares at
> the end of the disk.  I had fun implementing that for the wd (ST506)
> disk driver in NetBSD, once upon a time...

Uh... DEC STD 144 does not have anything to do with remapping bad blocks 
to replacement good blocks. DEC STD 144 describes how a medium stores 
known bad blocks on a disk, so that file system initialization can then 
take whatever action is needed so that these blocks are not used by 
anything. In RSX (and VMS), they will be included in a file called 
BADBLK.SYS, and thus be perceived as "used".

bad144 in NetBSD will keep a table in memory for such disks, and 
"skip" blocks listed as bad, meaning all other blocks get 
shifted. So it sort of does a remapping to good blocks by extending the 
used blocks, and does not allocate anything at the end of the disk per 
se. However, that is a Unix-specific solution. OS/8 had a similar 
solution for RL01 and RL02 drives, but not RK05 (as RK05 disks don't 
follow DEC STD 144).
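
The "skip and shift" idea can be sketched like this (an illustration of 
the scheme as described above, not the actual bad144 code):

    #include <stdio.h>

    /* Map a logical block to a physical block by skipping every block
       on a sorted bad-block list at or below it. */
    static long skip_bad(long lblk, const long *bad, int nbad)
    {
        long pblk = lblk;
        for (int i = 0; i < nbad; i++) {
            if (bad[i] > pblk)
                break;          /* sorted list: nothing more to skip */
            pblk++;             /* a bad block sits at or below us   */
        }
        return pblk;
    }

    int main(void)
    {
        long bad[] = { 10, 11, 100 };
        printf("%ld %ld %ld\n",
               skip_bad(9, bad, 3),     /* 9   */
               skip_bad(10, bad, 3),    /* 12  */
               skip_bad(99, bad, 3));   /* 102 */
        return 0;
    }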

I don't know exactly why DEC left the last three tracks unused. Might 
have been for diagnostic tools to have a scratch area to play with. 
Might have been that those tracks were found to be less reliable. Or 
maybe something completely different. But it was not for bad block 
replacement, as DEC didn't even do that on RK05 (or more or less in 
general at all before MSCP. MSCP, by the way, does not use DEC STD 144.)

Something Unix and other implementations usually miss with DEC STD 144 
is that there are actually two tables of bad blocks defined by the 
standard. There is the manufacturer-defined bad block list, which all 
software seems to know and deal with, and there is the user-defined bad 
block list, which is where all bad blocks that develop after 
manufacture are supposed to be recorded. Lots of software does not deal 
with this second list. In addition, you also have the pack serial number 
and some other stuff defined by DEC STD 144, which is also recorded on 
the last track, where the bad block lists are stored.

Note that this information is supposed to be on the last track, meaning 
you cannot use the scheme Unix uses to remap bad blocks, unless you keep 
some blocks before the last track unallocated.

The ultimate irony was when I discovered that bad144 under NetBSD was 
only built for x86 machines, and not for VAX, which is the 
only supported architecture that actually has real disks following 
DEC STD 144. But that was corrected about 20 years ago now (time flies).

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
       [not found] <mailman.137.1524667148.3788.tuhs@minnie.tuhs.org>
@ 2018-04-25 21:43 ` Johnny Billquist
  2018-04-25 22:24 ` Johnny Billquist
  2018-04-25 22:37 ` Johnny Billquist
  2 siblings, 0 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-04-25 21:43 UTC (permalink / raw)



On 2018-04-25 16:39, Ronald Natalie<ron at ronnatalie.com> wrote:
> 
>> On Apr 24, 2018, at 9:27 PM, Dan Stromberg<drsalists at gmail.com>  wrote:
>>
>> On Sun, Apr 22, 2018 at 2:51 PM, Dave Horsfall<dave at horsfall.org>  wrote:
>>> Now, how many youngsters know the difference between paging and swapping?
>> I'm a mere 52, but I believe paging is preferred over swapping.
>>
>> Swapping is an entire process at a time.
>>
>> Paging is just a page of memory at a time - like 4K or something thereabout.
> Early pages were 1K.

What machines are we talking about then?
The PDP-11 has 8K pages. The VAX has 512-byte pages, if we talk about hardware.
(And yes, I know pages on PDP-11s are not fixed in size, but if you want 
the page to go right up to the next page, it's 8K.)

> The fun argument is what is Virtual Memory.    Typically, people align that with paging but you can stretch the definition to cover paging.
> This was a point of contention in the early VAX Unix days as the ATT (System III, even V?) didn’t support paging on the VAX where as BSD did.
> Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as opposed to virtual addressing.

Weird comment. What does that mean? On a PDP-11, all your virtual memory 
was always there when the process was on the CPU, but it might not be 
there at other times. Just as not all processes' memory would be in 
physical memory all the time, since that would often require more 
physical memory than you had.
But you normally did not have demand paging, since that was not really 
gaining you much on a PDP-11. On the other hand, overlays do the same 
thing for you, but in userspace.

So you would claim that ATT Unix did not have virtual memory because it 
didn't do demand paging?

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 21:14                   ` Lawrence Stewart
@ 2018-04-25 21:30                     ` ron minnich
  0 siblings, 0 replies; 105+ messages in thread
From: ron minnich @ 2018-04-25 21:30 UTC (permalink / raw)



there's an interesting parallel discussion going on over at riscv about
page size. 4k won the day.

fairly shocking in a world of 4 TiB systems but ...

ron

On Wed, Apr 25, 2018 at 2:21 PM Lawrence Stewart <stewart at serissa.com>
wrote:

>
> > On 2018, Apr 25, at 4:45 PM, Larry McVoy <lm at mcvoy.com> wrote:
> >
> >>> Our comment was that “It ain’t VIRTUAL memory if it isn’t all
> >>> there” as
> >>> opposed to virtual addressing.
> >>
> >> Gene Amdahl was not a fan of paging.  He said that virtual memory
> >> merely magnifies the need for real memory.
> >
> > Wasn't that a Seymour Cray quote?  Seems like he was also not a fan of
> VM.
>
> My favorite is due to Dave Clark at MIT: “Anytime someone says ‘virtual’
> you should think ‘slow’”
>
> followed closely by “Any problem in computer science can be solved by
> adding a layer of abstraction” and “Any performance problem can be solved
> by removing a layer of abstraction.”
>
> As one converted to the HPC cults, I am naturally opposed to small pages,
> virtual anything, and interrupts…
>
> -L
>
>


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 20:45                 ` Larry McVoy
@ 2018-04-25 21:14                   ` Lawrence Stewart
  2018-04-25 21:30                     ` ron minnich
  0 siblings, 1 reply; 105+ messages in thread
From: Lawrence Stewart @ 2018-04-25 21:14 UTC (permalink / raw)




> On 2018, Apr 25, at 4:45 PM, Larry McVoy <lm at mcvoy.com> wrote:
> 
>>> Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as
>>> opposed to virtual addressing.
>> 
>> Gene Amdahl was not a fan of paging.  He said that virtual memory
>> merely magnifies the need for real memory.
> 
> Wasn't that a Seymour Cray quote?  Seems like he was also not a fan of VM.

My favorite is due to Dave Clark at MIT: “Anytime someone says ‘virtual’ you should think ‘slow’”

followed closely by “Any problem in computer science can be solved by adding a layer of abstraction” and “Any performance problem can be solved by removing a layer of abstraction.”

As one converted to the HPC cults, I am naturally opposed to small pages, virtual anything, and interrupts…

-L



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 20:29               ` Paul Winalski
@ 2018-04-25 20:45                 ` Larry McVoy
  2018-04-25 21:14                   ` Lawrence Stewart
  2018-04-25 23:01                 ` Bakul Shah
  1 sibling, 1 reply; 105+ messages in thread
From: Larry McVoy @ 2018-04-25 20:45 UTC (permalink / raw)


> > Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as
> > opposed to virtual addressing.
> 
> Gene Amdahl was not a fan of paging.  He said that virtual memory
> merely magnifies the need for real memory.

Wasn't that a Seymour Cray quote?  Seems like he was also not a fan of VM.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 12:18             ` Ronald Natalie
  2018-04-25 13:39               ` Tim Bradshaw
  2018-04-25 14:33               ` Ian Zimmerman
@ 2018-04-25 20:29               ` Paul Winalski
  2018-04-25 20:45                 ` Larry McVoy
  2018-04-25 23:01                 ` Bakul Shah
  2 siblings, 2 replies; 105+ messages in thread
From: Paul Winalski @ 2018-04-25 20:29 UTC (permalink / raw)



On 4/25/18, Ronald Natalie <ron at ronnatalie.com> wrote:
>
> Early pages were 1K.

On the VAX, pages were 512 bytes, the same size as a disk record.

> The fun argument is what is Virtual Memory.    Typically, people align that
> with paging but you can stretch the definition to cover paging.
> This was a point of contention in the early VAX Unix days as the ATT (System
> III, even V?) didn’t support paging on the VAX where as BSD did.

In my book, virtual memory is any addressing scheme where the
addresses that the program uses are different from the physical memory
addresses.  Nowadays most OSes use a scheme where each process has its
own virtual address space, and virtual memory is demand-paged from
backing store on disk.  But there have been other schemes.

Some PDP-11 models had a virtual addressing feature called PLAS
(Program Logical Address Space).  The PDP-11 had 16-bit addressing,
allowing for at most 64K per process.  To take advantage of physical
memory larger than 64K, PLAS allowed multiple 64K virtual address
spaces to be mapped to the larger physical memory.  Sort of the
reverse of the usual virtual addressing scheme, where there is more
virtual memory than physical memory.

IBM Disk Operating System for the System/370 (DOS/VS) had a single
virtual address space that could be bigger than physical memory and
was demand-paged.  This address space could be divided into up to five
partitions, each of which could run a program.  Each partition thus
represents what we now call a process.  IBM OS/VS1 and OS/VS2 SVS also
had a single virtual address space shared by all tasks (processes).
OS/VS2 MVS had the modern concept of a separate virtual address space
for each process.

> Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as
> opposed to virtual addressing.

Gene Amdahl was not a fan of paging.  He said that virtual memory
merely magnifies the need for real memory.

-Paul W.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 14:46                 ` Larry McVoy
@ 2018-04-25 15:03                   ` ron minnich
  0 siblings, 0 replies; 105+ messages in thread
From: ron minnich @ 2018-04-25 15:03 UTC (permalink / raw)


some of you doubtless remember the term 'expansion swap'.

I was joking that at some companies, you seem to be moved just so you can
end up in places where you have less room. I called this a 'compression
swap'. Nobody got the joke. Oh well.

ron


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 14:02                 ` arnold
@ 2018-04-25 14:59                   ` tfb
  0 siblings, 0 replies; 105+ messages in thread
From: tfb @ 2018-04-25 14:59 UTC (permalink / raw)


On 25 Apr 2018, at 15:02, arnold at skeeve.com wrote:
> 
> Why not? If there's enough backing store availble?

Well, I think because the situations where you are using huge pages and the situations where you're paging to backing store should really be mutually exclusive: if your system is paging to backing store you have problems which huge pages are not going to solve.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 14:33               ` Ian Zimmerman
@ 2018-04-25 14:46                 ` Larry McVoy
  2018-04-25 15:03                   ` ron minnich
  0 siblings, 1 reply; 105+ messages in thread
From: Larry McVoy @ 2018-04-25 14:46 UTC (permalink / raw)


So is this also /dev/drums?

https://www.youtube.com/watch?v=_pe6r06EIz0&feature=youtu.be
-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 14:02                 ` Tom Ivar Helbekkmo
@ 2018-04-25 14:38                   ` Clem Cole
  0 siblings, 0 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-25 14:38 UTC (permalink / raw)



On Wed, Apr 25, 2018 at 10:02 AM, Tom Ivar Helbekkmo via TUHS <
tuhs at minnie.tuhs.org> wrote:

> Ron Natalie <ron at ronnatalie.com> writes:
>
> > RK05’s were 4872 blocks.  Don’t know why that number has stuck with
> > me, too many invocations of mkfs I guess.  Oddly, DEC software for
> > some reason never used the last 72 blocks.
>
> I guess that's because they implemented DEC Standard 144, known as
> bad144 in BSD Unix.  It's a system of remapping bad sectors to spares at
> the end of the disk.  I had fun implementing that for the wd (ST506)
> disk driver in NetBSD, once upon a time...

​Right - back in the day, certainly for RK05's we would purchase 'perfect'
media, because the original Unix implementations did not handle bad
blocks.   You had to be careful, but you could purchase perfect RP06 disk
packs from one or two vendors.  That got harder to do as the disks got
larger, *i.e.* the RMxx series.

A disk pack typically shipped with either a physical piece of paper or
a sticker attached to it, with the HD/CYL bit offset and/or, if
pre-formatted, the HD/CYL/SEC of any bad spots (in the old days disks used
different formats depending on the controller, with different sector sizes,
metadata etc., so the bad spots were listed as a bit position per ring).

A typical thing that a number of us did if we did not have a perfect pack
was, after we did the Unix mkfs, manually create a file in either root or
lost+found called bad_blocks, and then, using fsdb or the like, assign the
bad blocks to that file.

The problem was the concept of 'grown' bad spots.   I'll stay away from the
argument over whether they were possible or not (long story), but in
practice, if a pack ended up with a block that was causing issues after it
was deployed (which did happen), you needed to assign the bad block to the
bad_blocks file manually.

BAD144 was a scheme DEC had to reserve a few blocks and then map out any
bad sectors.   The problem, of course, was that it screwed up all the
prediction SW in the OS, as the system would ask for block/cyl/sec X/Y/Z
and it could force a head seek to the end of the drive to get one of those
reserved blocks at the end of the disk.   The advantage of the DEC scheme
was that during formatting the known bad sectors would become 'invisible',
and of course if the pack 'grew' an error the driver could add it to the
list and replace the block on the fly (assuming there were available blocks
in the reserved table).

An interesting story on BAD144 - a good example of the original ideas of
Open Source.  It was actually the DEC 'Telephone Industries Group' in
Merrimack, NH (*a.k.a.* TIG), which had Armando, Fred Canter *et al.*, that
wrote the original support for DEC Standard 144, to help support the AT&T
customers, pre-Ultrix (remember, for a long time AT&T was DEC's largest
customer).  Once written, that code was also given to CSRG and released
into the wild via the UCB stream (and of course would become part of Ultrix
when DEC finally 'supported' UNIX officially).

I've long ago forgotten, but aps might remember; I think it was Fred that
did the original work.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 12:18             ` Ronald Natalie
  2018-04-25 13:39               ` Tim Bradshaw
@ 2018-04-25 14:33               ` Ian Zimmerman
  2018-04-25 14:46                 ` Larry McVoy
  2018-04-25 20:29               ` Paul Winalski
  2 siblings, 1 reply; 105+ messages in thread
From: Ian Zimmerman @ 2018-04-25 14:33 UTC (permalink / raw)



On 2018-04-25 08:18, Ronald Natalie wrote:

> The fun argument is what is Virtual Memory.  Typically, people align
> that with paging but you can stretch the definition to cover paging.
> This was a point of contention in the early VAX Unix days as the ATT
> (System III, even V?) didn’t support paging on the VAX where as BSD
> did.  Our comment was that “It ain’t VIRTUAL memory if it isn’t all
> there” as opposed to virtual addressing.

What about overlays?  Virtual or not?

The main difference may be where the code lives which brings non-present
address space back into directly addressable state: in the kernel (and
where: in an interrupt handler or higher up) or in userspace.

-- 
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
To reply privately _only_ on Usenet and on broken lists
which rewrite From, fetch the TXT record for no-use.mooo.com.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 13:39               ` Tim Bradshaw
@ 2018-04-25 14:02                 ` arnold
  2018-04-25 14:59                   ` tfb
  0 siblings, 1 reply; 105+ messages in thread
From: arnold @ 2018-04-25 14:02 UTC (permalink / raw)



> On 25 Apr 2018, at 13:18, Ronald Natalie <ron at ronnatalie.com> wrote:
> > 
> > Early pages were 1K.

Tim Bradshaw <tfb at tfeb.org> wrote:

> Do systems with huge pages page in the move-them-to-disk sense I wonder?
> I assume they don't in practice because it would be insane but I wonder
> if the VM system is in theory even willing to try.

Why not, if there's enough backing store available?

Note that many systems demand page-in the code section straight out of the
executable, so if some of those pages aren't needed, they can just
be released.  And said pages can be shared among all processes running
the same executable, for further savings.
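A quick way to see both effects on a current system (a sketch assuming a
Linux box with /proc; the use of sleep(1) is just an arbitrary example):

    # start two processes running the same executable
    sleep 300 & P1=$!
    sleep 300 & P2=$!
    # both show the binary's text as a read-only, executable, file-backed
    # mapping: demand-paged straight from the executable and shared
    grep 'r-xp.*sleep' /proc/$P1/maps /proc/$P2/maps
    kill $P1 $P2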

> Something I never completely understood in the paging vs swapping
> thing was that I think that systems which could page (well, 4.xBSD in
> particular) would *also* swap if pushed.  I think the reason for that was
> that, if you were really short of memory, swapping freed up the process
> structure and also the page tables &c for the process, which would still
> be needed even if all its pages had been evicted.  Is that right?

It depends upon the system. Some had pageable page tables, which is
pretty hairy. Others didn't. I don't remember what 4BSD did on the
Vax, but I suspect that the page tables and enough info to find everything
on swap stayed in kernel memory.  (Where's Chris Torek when you need
him? :-)

But yes, swapping was generally used to free up large amounts of memory
if under heavy load.

HTH,

Arnold


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 18:30               ` Ron Natalie
@ 2018-04-25 14:02                 ` Tom Ivar Helbekkmo
  2018-04-25 14:38                   ` Clem Cole
  0 siblings, 1 reply; 105+ messages in thread
From: Tom Ivar Helbekkmo @ 2018-04-25 14:02 UTC (permalink / raw)



Ron Natalie <ron at ronnatalie.com> writes:

> RK05’s were 4872 blocks.  Don’t know why that number has stuck with
> me, too many invocations of mkfs I guess.  Oddly, DEC software for
> some reason never used the last 72 blocks.

I guess that's because they implemented DEC Standard 144, known as
bad144 in BSD Unix.  It's a system of remapping bad sectors to spares at
the end of the disk.  I had fun implementing that for the wd (ST506)
disk driver in NetBSD, once upon a time...

-tih
-- 
Most people who graduate with CS degrees don't understand the significance
of Lisp.  Lisp is the most important idea in computer science.  --Alan Kay


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25 12:18             ` Ronald Natalie
@ 2018-04-25 13:39               ` Tim Bradshaw
  2018-04-25 14:02                 ` arnold
  2018-04-25 14:33               ` Ian Zimmerman
  2018-04-25 20:29               ` Paul Winalski
  2 siblings, 1 reply; 105+ messages in thread
From: Tim Bradshaw @ 2018-04-25 13:39 UTC (permalink / raw)


On 25 Apr 2018, at 13:18, Ronald Natalie <ron at ronnatalie.com> wrote:
> 
> Early pages were 1K.

Do systems with huge pages page in the move-them-to-disk sense I wonder?  I assume they don't in practice because it would be insane but I wonder if the VM system is in theory even willing to try.

Something I never completely understood in the paging vs swapping thing was that I think that systems which could page (well, 4.xBSD in particular) would *also* swap if pushed.  I think the reason for that was that, if you were really short of memory, swapping freed up the process structure and also the page tables &c for the process, which would still be needed even if all its pages had been evicted.  Is that right?





^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-25  1:27           ` Dan Stromberg
@ 2018-04-25 12:18             ` Ronald Natalie
  2018-04-25 13:39               ` Tim Bradshaw
                                 ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Ronald Natalie @ 2018-04-25 12:18 UTC (permalink / raw)





> On Apr 24, 2018, at 9:27 PM, Dan Stromberg <drsalists at gmail.com> wrote:
> 
> On Sun, Apr 22, 2018 at 2:51 PM, Dave Horsfall <dave at horsfall.org> wrote:
>> Now, how many youngsters know the difference between paging and swapping?
> 
> I'm a mere 52, but I believe paging is preferred over swapping.
> 
> Swapping is an entire process at a time.
> 
> Paging is just a page of memory at a time - like 4K or something thereabout.

Early pages were 1K.

The fun argument is what is Virtual Memory.    Typically, people align that with paging but you can stretch the definition to cover swapping.
This was a point of contention in the early VAX Unix days as the ATT (System III, even V?) didn’t support paging on the VAX whereas BSD did.
Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as opposed to virtual addressing.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-22 21:51         ` Dave Horsfall
@ 2018-04-25  1:27           ` Dan Stromberg
  2018-04-25 12:18             ` Ronald Natalie
  0 siblings, 1 reply; 105+ messages in thread
From: Dan Stromberg @ 2018-04-25  1:27 UTC (permalink / raw)


On Sun, Apr 22, 2018 at 2:51 PM, Dave Horsfall <dave at horsfall.org> wrote:
> Now, how many youngsters know the difference between paging and swapping?

I'm a mere 52, but I believe paging is preferred over swapping.

Swapping is an entire process at a time.

Paging is just a page of memory at a time - like 4K or something thereabout.

BTW: I love Linux :) at least until something better comes along.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  6:22         ` Bakul Shah
@ 2018-04-24 14:57           ` Warner Losh
  0 siblings, 0 replies; 105+ messages in thread
From: Warner Losh @ 2018-04-24 14:57 UTC (permalink / raw)


On Tue, Apr 24, 2018 at 12:22 AM, Bakul Shah <bakul at bitblocks.com> wrote:

> On Mon, 23 Apr 2018 22:59:19 -0600 Warner Losh <imp at bsdimp.com> wrote:
> > >         ...
> > >         Not_Zoned       # Zone Mode <<== this seems wrong.
> > >
> >
> > That's right. This is for BIO_ZONE stuff, which has to do with host
> managed
> > and host aware SMR drive zones. That's different than the zones you are
> > talking about.
>
> Ah. Thanks! Does host management of SMR zones provide better
> throughput for sequential writes? Enough to make it worth it?
> [I guess this may be something you guys may care about?]
> Haven't had a chance to work on storage stuff for ages.  [Last
> I played with Ceph was 5 years ago and at a higher level than
> disks.]


Right now, I don't think that we do anything in stock FreeBSD with the
zones. I've looked at trying to create some kind of FS that copes with the
large-granularity write blocks that host-managed SMR drives would need, and
at the changes we'd need to a write-in-place FS to take advantage of it.
It's possible, but it would turn UFS from a write-in-place system into one
that is write-in-place only for meta-data, with different free block
allocation methods. So far, it hasn't been enough of a win to be worth
bothering with for our application (eg, we can get 10-20% more storage, but
that delta is likely to remain constant, and the effort to make it happen is
high enough that the savings isn't there to pay for the development).

> Yes. This matches our experience where we get 1.5x better on the low LBAs
> > than the high LBAs. We're looking to 'short stroke' the drive to the
> first
> > part of it to get better performance... Toss a filesystem on top of it,
> and
> > have a more random workload and it's down to about 30% better than using
> > the whole drive....
>
> Is the tradeoff worth it? Now you have choices like SATA vs
> SAS vs SSD vs PCIe....
>

We have a multi-tiered storage architecture. When you want to play a video
from our service, we see if any of the close, fast boxes has a copy we can
use. If they are too busy, we go back to slower, but more complete tiers.
The last tier is made up of machines with lots of spinning disks. Some
catalogs are small enough that only using 1/2 the drive, but getting 30%
better throughput is the right engineering decision since it would improve
network utilization w/o needing to deploy more servers. We use all those
technologies in the different tiers: our fastest 100G boxes are NVMe, the
40G boxes we have are JBOD of SSDs, the 10G storage boxes are spinning
rust. We use SATA for SSDs since the SAS SSDs are super pricy, but we use
SAS HDDs since we need the deeper queues and other features of SAS that are
absent from SATA. We also sometimes oversubscribe PCIe lanes to get better
storage density at a cheaper price point.

There's lots of tradeoffs that can be made....

Warner


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  9:37                       ` Michael Kjörling
  2018-04-24  9:57                         ` Dave Horsfall
@ 2018-04-24 13:03                         ` Arthur Krewat
  1 sibling, 0 replies; 105+ messages in thread
From: Arthur Krewat @ 2018-04-24 13:03 UTC (permalink / raw)



On 4/24/2018 5:37 AM, Michael Kjörling wrote:
>
> I couldn't quite resist, so tried it out. Take this for what it is, an
> anecdote.
>
To make it even worse in terms of trying to figure out what a disk will 
do for (to) you in terms of performance...

A new caching scheme on a set of 10TB drives I bought a while ago - they 
are HGST SAS drives.

They incorporated a set of cylinders spaced across the disk that are 
"cache" cylinders. Wherever the heads are "right now", the closest cache 
cylinders will be used to write to. Once there's enough free time, it 
will go back and read those cache cylinders, and put the blocks where 
they are "supposed" to be.

http://www.tomsitpro.com/articles/hgst-ultrastar-c15k600-hdd,2-906-3.html

Those are not the drives I bought, but as far as I know, even their
near-line SAS products have this feature, called NVC Quick Cache.

ak


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  9:57                         ` Dave Horsfall
@ 2018-04-24 13:01                           ` Nemo
  0 siblings, 0 replies; 105+ messages in thread
From: Nemo @ 2018-04-24 13:01 UTC (permalink / raw)


On 24 April 2018 at 05:57, Dave Horsfall <dave at horsfall.org> wrote:
[...]
> A well-known question in the security field, for instance, is how do you
> know that you have *really* erased that disk?  The thing can lie as much as
> it likes, and you'll never know for sure.

That reminds me of an old mil-spec that told you how to dispose of
disc-packs by grinding down the platters by hand, what grit of paper
to use, which directions, and how often.  (The exact number escapes me.)

N.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  9:37                       ` Michael Kjörling
@ 2018-04-24  9:57                         ` Dave Horsfall
  2018-04-24 13:01                           ` Nemo
  2018-04-24 13:03                         ` Arthur Krewat
  1 sibling, 1 reply; 105+ messages in thread
From: Dave Horsfall @ 2018-04-24  9:57 UTC (permalink / raw)



On Tue, 24 Apr 2018, Michael Kjörling wrote:

[ ... ]

> That's definitely statistically significantly slower toward the outer 
> edge of the disk as presented by the OS. That _should_ translate to 
> slower for higher LBAs, but with all the magic happening in modern 
> systems, you might never know...

Exactly; with disks these days it's pointless trying to second-guess 
what's happening under the bonnet (ObUS: hood).

A well-known question in the security field, for instance, is how do you 
know that you have *really* erased that disk?  The thing can lie as much 
as it likes, and you'll never know for sure.

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:30                     ` Grant Taylor
@ 2018-04-24  9:37                       ` Michael Kjörling
  2018-04-24  9:57                         ` Dave Horsfall
  2018-04-24 13:03                         ` Arthur Krewat
  0 siblings, 2 replies; 105+ messages in thread
From: Michael Kjörling @ 2018-04-24  9:37 UTC (permalink / raw)



On 23 Apr 2018 17:30 -0600, from tuhs at minnie.tuhs.org (Grant Taylor via TUHS):
> On 04/23/2018 04:15 PM, Warner Losh wrote:
>> It's weird. These days lower LBAs perform better on spinning
>> drives. We're seeing about 1.5x better performance on the first
>> 30% of a drive than on the last 30%, at least for read speeds for
>> video streaming....
> 
> I think manufacturers have switched things around on us.  I'm used
> to higher LBA numbers being on the outside of the disk.  But I've
> seen anecdotal indicators that the opposite is now true.

I couldn't quite resist, so tried it out. Take this for what it is, an
anecdote.

Reading 10 GB in direct mode using dd with no skip at the beginning,
in 1 MiB blocks, gives me about 190 MB/s on one of the Seagate SAS
disks in my PC, and some 165 MB/s on one of the HGST SATA disks in the
same PC. Obviously, that's for purely sequential I/O, with very little
other I/O load.

Doing the same with an initial skip of 3,500,000 blocks (these are 4
TB drives, so this puts the read toward the outer limit), I get 105
MB/s on the Seagate SAS and 100 MB/s on the HGST SATA.
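The invocations would have looked something like this (my reconstruction,
assuming GNU dd on Linux and a hypothetical /dev/sdb; adjust the device to
taste):

    # ~10 GB sequential read from the start of the drive, bypassing the cache
    dd if=/dev/sdb of=/dev/null bs=1M count=10000 iflag=direct
    # same again, but starting 3,500,000 MiB (~3.3 TiB) into the drive
    dd if=/dev/sdb of=/dev/null bs=1M count=10000 skip=3500000 iflag=direct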

I did the same thing twice to make sure caching wasn't somehow
interfering with the values. The differences for all reported transfer
rates were marginal, and well within a reasonable margin of error.

That's definitely statistically significantly slower toward the outer
edge of the disk as presented by the OS. That _should_ translate to
slower for higher LBAs, but with all the magic happening in modern
systems, you might never know...

Of course, back in ye olden days, even 100 MB/s would have been
blazingly fast. Are we spoiled these days to think of throughputs on
the order of a gigabit per second as slow?

-- 
Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se
  “The most dangerous thought that you can have as a creative person
              is to think you know what you’re doing.” (Bret Victor)


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:45                   ` Arthur Krewat
@ 2018-04-24  8:05                     ` tfb
  0 siblings, 0 replies; 105+ messages in thread
From: tfb @ 2018-04-24  8:05 UTC (permalink / raw)


On 24 Apr 2018, at 00:45, Arthur Krewat <krewat at kilonet.net> wrote:
> 
> Wow, I was going to refute this, as I don't ever remember seeing any Solaris box do this. However, a Solaris 8 x86 vanilla install on VMware I did recently does indeed have swap on the first cylinders (see below). A Solaris 7 x86 install does not exhibit this behavior. Now I'm wondering if Solaris 9 does it. A Solaris 10 box I have access to has swap after root, not before.


It makes sense that it was 8.  We hardly used 9 so I don't know what it did, but I suspect by 10 someone had had very strong words with whoever thought it was a good idea after some large customer had had an only-technically-their-fault outage because of it and been fierce at Sun.  It was really stupidly dumb, I thought: solving a problem no-one had any more by the wrong method (if paging *was* a problem a better solution is to add a second slice on another physical disk as swap).
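A minimal sketch of that alternative, assuming Solaris and a hypothetical
second spindle c1t1d0 with slice 1 set aside for swap:

    swap -a /dev/dsk/c1t1d0s1     # add the slice as extra swap right now
    swap -l                       # list the configured swap devices
    # to make it permanent, a vfstab entry of the form:
    #   /dev/dsk/c1t1d0s1  -  -  swap  -  no  -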


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-24  7:10 Rudi Blom
  0 siblings, 0 replies; 105+ messages in thread
From: Rudi Blom @ 2018-04-24  7:10 UTC (permalink / raw)



>Date: Mon, 23 Apr 2018 13:51:07 -0400
>From: Clem Cole <clemc at ccc.com>
>To: Ron Natalie <ron at ronnatalie.com>
>Cc: Tim Bradshaw <tfb at tfeb.org>, TUHS main list <tuhs at minnie.tuhs.org>
>Subject: Re: [TUHS] /dev/drum
>Message-ID:
>        <CAC20D2PEzAayjfaQN+->kQS=H7npcEZ_OKXL1ffPxak5b2ENv4Q at mail.gmail.com>
>Content-Type: text/plain; charset="utf-8"

... some stuff removed ...

>Exactly...   For instance an RK04 was less than 5K blocks (4620 or some
>such - I've forgotten the actual amount).  The disk was mkfs'ed to the
>first 4K and the leftover was given to the swap system.   By the time of
>4.X, the RP06 was 'partitioned' into 'rings' (some overlapping).  The 'a'
>partition was root, the 'b' was swap and one of the others was the rest.
>Later the 'c' was a short form for copying the entire disk.

Wondered why, but I guess now I know that's the reason Digital UNIX on
alpha used the same disk layout. From an AlphaServer DS10 running
DU 4.0g, the output of "disklabel -r rz16a":

# /dev/rrz16a:
type: SCSI
disk: BB009222
label:
flags:
bytes/sector: 512
sectors/track: 168
tracks/cylinder: 20
sectors/cylinder: 3360
cylinders: 5273
sectors/unit: 17773524
rpm: 7200
interleave: 1
trackskew: 66
cylinderskew: 83
headswitch: 0		# milliseconds
track-to-track seek: 0	# milliseconds
drivedata: 0

8 partitions:
#          size     offset    fstype   [fsize bsize   cpg]      #
NOTE: values not exact
  a:     524288          0     AdvFS                    	# (Cyl.    0 - 156*)
  b:    1572864     524288      swap                    	# (Cyl.  156*- 624*)
  c:   17773524          0    unused        0     0       	# (Cyl.    0 - 5289*)
  d:          0          0    unused        0     0       	# (Cyl.    0 - -1)
  e:          0          0    unused        0     0       	# (Cyl.    0 - -1)
  f:          0          0    unused        0     0       	# (Cyl.    0 - -1)
  g:    4194304    2097152     AdvFS                    	# (Cyl.  624*- 1872*)
  h:   11482068    6291456     AdvFS                    	# (Cyl. 1872*- 5289*)


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:44 ` Johnny Billquist
                     ` (4 preceding siblings ...)
  2018-04-24  4:32   ` Grant Taylor
@ 2018-04-24  6:46   ` Lars Brinkhoff
  5 siblings, 0 replies; 105+ messages in thread
From: Lars Brinkhoff @ 2018-04-24  6:46 UTC (permalink / raw)


Johnny Billquist wrote:
> Which is also why the file system for RSX (ODS-1) placed the index
> file (equivalent of the inode table) at the middle of the disk by
> default.  Not sure if Unix did that optimization, but I would hope so.

I know of an operating system predating Unix which has that
optimization.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  4:59       ` Warner Losh
@ 2018-04-24  6:22         ` Bakul Shah
  2018-04-24 14:57           ` Warner Losh
  0 siblings, 1 reply; 105+ messages in thread
From: Bakul Shah @ 2018-04-24  6:22 UTC (permalink / raw)


On Mon, 23 Apr 2018 22:59:19 -0600 Warner Losh <imp at bsdimp.com> wrote:
> >         ...
> >         Not_Zoned       # Zone Mode <<== this seems wrong.
> >
> 
> That's right. This is for BIO_ZONE stuff, which has to do with host managed
> and host aware SMR drive zones. That's different than the zones you are
> talking about.

Ah. Thanks! Does host management of SMR zones provide better
throughput for sequential writes? Enough to make it worth it?
[I guess this may be something you guys may care about?]
Haven't had a chance to work on storage stuff for ages.  [Last
I played with Ceph was 5 years ago and at a higher level than
disks.]

> Yes. This matches our experience where we get 1.5x better on the low LBAs
> than the high LBAs. We're looking to 'short stroke' the drive to the first
> part of it to get better performance... Toss a filesystem on top of it, and
> have a more random workload and it's down to about 30% better than using
> the whole drive....

Is the tradeoff worth it? Now you have choices like SATA vs
SAS vs SSD vs PCIe....

We've come a long way from /dev/drum :-)


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  4:49     ` Bakul Shah
@ 2018-04-24  4:59       ` Warner Losh
  2018-04-24  6:22         ` Bakul Shah
  0 siblings, 1 reply; 105+ messages in thread
From: Warner Losh @ 2018-04-24  4:59 UTC (permalink / raw)


On Mon, Apr 23, 2018 at 10:49 PM, Bakul Shah <bakul at bitblocks.com> wrote:

> On Mon, 23 Apr 2018 22:32:26 -0600 Grant Taylor via TUHS <
> tuhs at minnie.tuhs.org> wrote:
> >
> > I had always assumed that the outer edge (what I thought was the end of
> > the disk) was faster than the inner edge (what I thought was the
> > beginning of the disk) because of geometry.  However, as Ronald stated,
> > hard drives were constant angular density.  Thus negating what I
> > originally thought about speed.
>
> Constant angular velocity means faster "linear" velocity for
> tracks further away from the center.  Since 1990 or so disk
> tracks are divided up in 16 or so "zones", where outer zones
> have more blocks per track.  This translates to higher
> throughput.
>
> A modern Seagate Exos SAS disk may have a range of 279MB/s
> (outermost) to 136MB/s (innermost) or 300MB/s to 210MB/s for
> faster disks (15Krpm).  Disk vendors don't seem to break this
> range out for consumer drives. But you can measure it using
> tools like diskinfo on FreeBSD. For example:
>
> # diskinfo -t /dev/ada4 # this is an 5 year old 1TB WD "Black" disk.
> /dev/ada4
>         ...
>         Not_Zoned       # Zone Mode <<== this seems wrong.
>

That's right. This is for BIO_ZONE stuff, which has to do with host managed
and host aware SMR drive zones. That's different than the zones you are
talking about.


>         ...
> Transfer rates:
>         outside:       102400 kbytes in   0.972176 sec =   105331
> kbytes/sec
>         middle:        102400 kbytes in   1.088977 sec =    94033
> kbytes/sec
>         inside:        102400 kbytes in   1.804460 sec =    56748
> kbytes/sec
>

Yes. This matches our experience where we get 1.5x better on the low LBAs
than the high LBAs. We're looking to 'short stroke' the drive to the first
part of it to get better performance... Toss a filesystem on top of it, and
have a more random workload and it's down to about 30% better than using
the whole drive....
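A rough sketch of what 'short stroking' with stock tools can look like on
FreeBSD (the device name ada4 and the 1.2TB figure are illustrative
assumptions, not a description of the setup above):

    gpart create -s gpt ada4
    gpart add -t freebsd-ufs -s 1200G -l shortstroke ada4   # low LBAs only
    newfs -U /dev/gpt/shortstroke
    # the rest of the drive is simply left unallocated, trading capacity
    # for the faster outer-zone transfer rate and shorter average seeks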

Warner


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  4:32   ` Grant Taylor
@ 2018-04-24  4:49     ` Bakul Shah
  2018-04-24  4:59       ` Warner Losh
  0 siblings, 1 reply; 105+ messages in thread
From: Bakul Shah @ 2018-04-24  4:49 UTC (permalink / raw)


On Mon, 23 Apr 2018 22:32:26 -0600 Grant Taylor via TUHS <tuhs at minnie.tuhs.org> wrote:
> 
> I had always assumed that the outer edge (what I thought was the end of
> the disk) was faster than the inner edge (what I thought was the
> beginning of the disk) because of geometry.  However, as Ronald stated,
> hard drives were constant angular density.  Thus negating what I
> originally thought about speed.

Constant angular velocity means faster "linear" velocity for
tracks further away from the center.  Since 1990 or so disk
tracks are divided up in 16 or so "zones", where outer zones
have more blocks per track.  This translates to higher
throughput.
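A back-of-the-envelope way to see why (my sketch, not from any spec sheet):
at a constant rotational speed the linear velocity under the head is
v = 2 * pi * r * (RPM / 60), so a track at roughly twice the radius passes
the head at roughly twice the speed.  With zoned recording holding the
linear bit density roughly constant, sustained throughput therefore scales
more or less with radius, which matches the roughly 2:1 outer-to-inner
figures below.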

A modern Seagate Exos SAS disk may have a range of 279MB/s
(outermost) to 136MB/s (innermost) or 300MB/s to 210MB/s for
faster disks (15Krpm).  Disk vendors don't seem to break this
range out for consumer drives. But you can measure it using
tools like diskinfo on FreeBSD. For example:

# diskinfo -t /dev/ada4 # this is a 5-year-old 1TB WD "Black" disk.
/dev/ada4
	...
        Not_Zoned       # Zone Mode <<== this seems wrong.
        ...
Transfer rates:
        outside:       102400 kbytes in   0.972176 sec =   105331 kbytes/sec
        middle:        102400 kbytes in   1.088977 sec =    94033 kbytes/sec
        inside:        102400 kbytes in   1.804460 sec =    56748 kbytes/sec


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:44 ` Johnny Billquist
                     ` (3 preceding siblings ...)
  2018-04-24  1:02   ` Lyndon Nerenberg
@ 2018-04-24  4:32   ` Grant Taylor
  2018-04-24  4:49     ` Bakul Shah
  2018-04-24  6:46   ` Lars Brinkhoff
  5 siblings, 1 reply; 105+ messages in thread
From: Grant Taylor @ 2018-04-24  4:32 UTC (permalink / raw)


On 04/23/2018 05:44 PM, Johnny Billquist wrote:
> But this whole optimization for swap based on transfer speeds makes no 
> sense to me. The dominating factor in spinning rust is seek times, and 
> not transfer speed. If you place the swap at one end of the disk, it 
> won't matter much that transfers will be faster, as seek times will on 
> average be much longer, and that will eat up any transfer gain ten times 
> over before even thinking. (Unless all your disk ever does is swapping, 
> at which time the heads can stay around the swapping area all the time.)

I wonder if part of the (perceived?) performance gain was from the 
likelihood that swap at one end of the drive meant that things could be 
contiguous.  Seek, lay down / pick up a large (or at least not small) 
number of sectors, and seek back.

I had always assumed that the outer edge (what I thought was the end of 
the disk) was faster than the inner edge (what I thought was the 
beginning of the disk) because of geometry.  However, as Ronald stated, 
hard drives were constant angular density.  Thus negating what I 
originally thought about speed.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:44 ` Johnny Billquist
                     ` (2 preceding siblings ...)
  2018-04-24  0:25   ` Warren Toomey
@ 2018-04-24  1:02   ` Lyndon Nerenberg
  2018-04-24  4:32   ` Grant Taylor
  2018-04-24  6:46   ` Lars Brinkhoff
  5 siblings, 0 replies; 105+ messages in thread
From: Lyndon Nerenberg @ 2018-04-24  1:02 UTC (permalink / raw)


> But this whole optimization for swap based on transfer speeds makes no sense 
> to me. The dominating factor in spinning rust is seek times, and not transfer 
> speed.

And thus were born cylinder groups.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-24  0:25   ` Warren Toomey
@ 2018-04-24  0:31     ` Dave Horsfall
  0 siblings, 0 replies; 105+ messages in thread
From: Dave Horsfall @ 2018-04-24  0:31 UTC (permalink / raw)


On Tue, 24 Apr 2018, Warren Toomey wrote:

>> Not sure if Unix did that optimization, but I would hope so. (Never dug 
>> into that part of the code.)
>
> Boston Children's Museum RK05 driver for 6th Ed springs to mind!

Yep, with the inodes in the middle of the disk!  It also helped that we 
(UNSW) put the superblock in the middle of said slice...

-- 
Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW)


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:01 ` Dave Horsfall
  2018-04-23 23:49   ` Dave Horsfall
@ 2018-04-24  0:26   ` Ronald Natalie
  1 sibling, 0 replies; 105+ messages in thread
From: Ronald Natalie @ 2018-04-24  0:26 UTC (permalink / raw)


We never had to worry about DEC CEs.    However, when we got to the VAX where we had a built in time of day clock, anytime anybody ran diagnostics or any DEC OS where they set the clock, it would get changed to localtime (what DEC used) rather than the Zulu time UNIX used.



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:44 ` Johnny Billquist
  2018-04-23 23:57   ` Steve Nickolas
  2018-04-24  0:24   ` Ronald Natalie
@ 2018-04-24  0:25   ` Warren Toomey
  2018-04-24  0:31     ` Dave Horsfall
  2018-04-24  1:02   ` Lyndon Nerenberg
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 105+ messages in thread
From: Warren Toomey @ 2018-04-24  0:25 UTC (permalink / raw)


On Tue, Apr 24, 2018 at 01:44:06AM +0200, Johnny Billquist wrote:
>Which is also why the file system for RSX (ODS-1) placed the index 
>file (equivalent of the inode table) at the middle of the disk by 
>default.
>
>Not sure if Unix did that optimization, but I would hope so. (Never 
>dug into that part of the code.)

Boston Children's Museum RK05 driver for 6th Ed springs to mind!

Cheers, Warren


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:44 ` Johnny Billquist
  2018-04-23 23:57   ` Steve Nickolas
@ 2018-04-24  0:24   ` Ronald Natalie
  2018-04-24  0:25   ` Warren Toomey
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 105+ messages in thread
From: Ronald Natalie @ 2018-04-24  0:24 UTC (permalink / raw)



It also makes no sense because disks of the day were constant angular density (unlike CDs, for example, which are constant linear).    There’s no difference in transfer rate anywhere on the disk.   Each track has the same number of sectors.
We spent a lot of time in those days with “elevator” algorithms and clustering inodes to try to minimize seek time.   The other thing that was done on the fancier devices (not often found on PDP-11s) was optimizing where you were in the rotational angle.

The original UNIX filesystems were dumb.    It was <boot block><superblock><inodes><datablocks>.     It wasn’t until later, with things like the Berkeley file system, that layouts started to get more clever.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:44 ` Johnny Billquist
@ 2018-04-23 23:57   ` Steve Nickolas
  2018-04-24  0:24   ` Ronald Natalie
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 105+ messages in thread
From: Steve Nickolas @ 2018-04-23 23:57 UTC (permalink / raw)


On Tue, 24 Apr 2018, Johnny Billquist wrote:

> Which is also why the file system for RSX (ODS-1) placed the index file 
> (equivalent of the inode table) at the middle of the disk by default.
>
> Not sure if Unix did that optimization, but I would hope so. (Never dug into 
> that part of the code.)

That reminds me of the Apple ][, whose original DOS put the directory and 
bitmap table on track 17 (0-indexed) on a 35-track floppy disk.

-uso.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 23:01 ` Dave Horsfall
@ 2018-04-23 23:49   ` Dave Horsfall
  2018-04-24  0:26   ` Ronald Natalie
  1 sibling, 0 replies; 105+ messages in thread
From: Dave Horsfall @ 2018-04-23 23:49 UTC (permalink / raw)


On Tue, 24 Apr 2018, Dave Horsfall wrote:

> Well, this would never do, of course, as it required us to take down our 
> Unix box (one of the first in Oz) to run RSX, so my then-boss wrote a 
> program (in FORTRAN) so we could do it under Unix; said LV-11 actually 
> spent most of its time plotting biorhythm charts with a program written 
> by said boss (who also gave me my first taste of grass, and I hated it).

One minor detail that I forgot: it was the (mostly male) computer
operators who asked us to plot the biorhythms of both their (mostly
female) partners and themselves, to see if they were compatible...

I think we (not me) began charging their department per plot, as that
thermal paper was pretty pricey...  Hey, it was all funny money, after 
all!

This was the 70s/80s, man...

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 22:07                 ` Tim Bradshaw
  2018-04-23 22:15                   ` Warner Losh
@ 2018-04-23 23:45                   ` Arthur Krewat
  2018-04-24  8:05                     ` tfb
  1 sibling, 1 reply; 105+ messages in thread
From: Arthur Krewat @ 2018-04-23 23:45 UTC (permalink / raw)



On 4/23/2018 6:07 PM, Tim Bradshaw wrote:
> On 23 Apr 2018, at 21:47, Grant
> And the cretinism was that this was mid 2000s:  if you had machine that was paging the answer was to buy more memory not to arrange for faster swap space: it was solving a problem that nobody had any more.
>
>
Wow, I was going to refute this, as I don't ever remember seeing any 
Solaris box do this. However, a Solaris 8 x86 vanilla install on VMware 
I did recently does indeed have swap on the first cylinders (see below). 
A Solaris 7 x86 install does not exhibit this behavior. Now I'm 
wondering if Solaris 9 does it. A Solaris 10 box I have access to has 
swap after root, not before.

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
        0. c0d0 <DEFAULT cyl 22166 alt 2 hd 15 sec 63>
           /pci at 0,0/pci-ide at 7,1/ide at 0/cmdk at 0,0
Specify disk (enter its number): 0
selecting c0d0
Controller working list found
[disk formatted, defect list found]
Warning: Current Disk has mounted partitions.


FORMAT MENU:
         disk       - select a disk
         type       - select (define) a disk type
         partition  - select (define) a partition table
         current    - describe the current disk
         format     - format and analyze the disk
         fdisk      - run the fdisk program
         repair     - repair a defective sector
         show       - translate a disk address
         label      - write label to the disk
         analyze    - surface analysis
         defect     - defect list management
         backup     - search for backup labels
         verify     - read and display labels
         save       - save new disk/partition definitions
         volname    - set 8-character volume name
         !<cmd>     - execute <cmd>, then return
         quit
format> part


PARTITION MENU:
         0      - change `0' partition
         1      - change `1' partition
         2      - change `2' partition
         3      - change `3' partition
         4      - change `4' partition
         5      - change `5' partition
         6      - change `6' partition
         7      - change `7' partition
         select - select a predefined table
         modify - modify a predefined partition table
         name   - name the current table
         print  - display the current table
         label  - write partition map and label to the disk
         !<cmd> - execute <cmd>, then return
         quit
partition> pri
Current partition table (original):
Total disk cylinders available: 22166 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size Blocks
   0       root    wm    1113 -  3704        1.17GB (2592/0/0)   2449440
   1       swap    wu       3 -  1112      512.18MB (1110/0/0)   1048950
   2     backup    wm       0 - 22166        9.99GB (22167/0/0) 20947815
   3 unassigned    wm       0                0 (0/0/0)            0
   4 unassigned    wm       0                0 (0/0/0)            0
   5 unassigned    wm       0                0 (0/0/0)            0
   6 unassigned    wm       0                0 (0/0/0)            0
   7       home    wm    3705 - 22166        8.32GB (18462/0/0) 17446590
   8       boot    wu       0 -     0        0.46MB (1/0/0)          945
   9 alternates    wu       1 -     2        0.92MB (2/0/0)         1890




^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
       [not found] <mailman.125.1524526228.3788.tuhs@minnie.tuhs.org>
@ 2018-04-23 23:44 ` Johnny Billquist
  2018-04-23 23:57   ` Steve Nickolas
                     ` (5 more replies)
  0 siblings, 6 replies; 105+ messages in thread
From: Johnny Billquist @ 2018-04-23 23:44 UTC (permalink / raw)


On 2018-04-24 01:30, Grant Taylor <gtaylor at tnetconsulting.net> wrote:

> On 04/23/2018 04:15 PM, Warner Losh wrote:
>> It's weird. These days lower LBAs perform better on spinning drives.
>> We're seeing about 1.5x better performance on the first 30% of a drive
>> than on the last 30%, at least for read speeds for video streaming....
> I think manufacturers have switched things around on us.  I'm used to
> higher LBA numbers being on the outside of the disk.  But I've seen
> anecdotal indicators that the opposite is now true.

That must have been somewhere in the middle of history in that case. Old
(proper) drives had/have track 0 at the outer edge. The disk loaded the
heads after spin up, and that was at the outer edge, so you just locked on
to track 0, which would then be nearby.
Heads had to be retracted for the disk pack to be replaced.

But this whole optimization for swap based on transfer speeds makes no 
sense to me. The dominating factor in spinning rust is seek times, and 
not transfer speed. If you place the swap at one end of the disk, it 
won't matter much that transfers will be faster, as seek times will on 
average be much longer, and that will eat up any transfer gain ten times 
over before even thinking. (Unless all your disk ever does is swapping, 
at which time the heads can stay around the swapping area all the time.)
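To put rough, hedged numbers on that (ballpark figures of mine, not from any
spec sheet): an average seek costs on the order of 10 ms on a modern drive,
and several times that on the drives discussed earlier, while the difference
in transfer time for a single page between the fast end and the slow end of
the disk is a small fraction of that - so one extra long seek to a far-away
swap area swallows the transfer-rate gain of a great many pages.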

Which is also why the file system for RSX (ODS-1) placed the index file 
(equivalent of the inode table) at the middle of the disk by default.

Not sure if Unix did that optimization, but I would hope so. (Never dug 
into that part of the code.)

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 22:15                   ` Warner Losh
@ 2018-04-23 23:30                     ` Grant Taylor
  2018-04-24  9:37                       ` Michael Kjörling
  0 siblings, 1 reply; 105+ messages in thread
From: Grant Taylor @ 2018-04-23 23:30 UTC (permalink / raw)


On 04/23/2018 04:15 PM, Warner Losh wrote:
> It's weird. These days lower LBAs perform better on spinning drives. 
> We're seeing about 1.5x better performance on the first 30% of a drive 
> than on the last 30%, at least for read speeds for video streaming....

I think manufacturers have switched things around on us.  I'm used to 
higher LBA numbers being on the outside of the disk.  But I've seen 
anecdotal indicators that the opposite is now true.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 18:41 Noel Chiappa
  2018-04-23 19:09 ` Clem Cole
@ 2018-04-23 23:01 ` Dave Horsfall
  2018-04-23 23:49   ` Dave Horsfall
  2018-04-24  0:26   ` Ronald Natalie
  1 sibling, 2 replies; 105+ messages in thread
From: Dave Horsfall @ 2018-04-23 23:01 UTC (permalink / raw)


On Mon, 23 Apr 2018, Noel Chiappa wrote:

> Speaking of fixed-head disks, one of the Bell systems used (IIRC) an 
> RS04 fixed-head disk for the root. DEC apparently only used that disk 
> for swapping in their OS's... So the DEC diagnsotics felt free to 
> scribble on the disk. So, Field Circus comes in to work on the 
> machine... Ooops!

Giggle...  We were probably the only site in Oz with two tape drives on 
our 11/40 at UNSW; the X-ray crystallographers (or was it the molecular 
biologists?  They're all the same to me) used to produce a tape of their 
results on the 360/50, to be read under RSX-11 to go to another tape, 
thence to be plotted on our LV-11 Versatec (and probably the only one in 
the country).

Well, this would never do, of course, as it required us to take down our 
Unix box (one of the first in Oz) to run RSX, so my then-boss wrote a 
program (in FORTRAN) so we could do it under Unix; said LV-11 actually 
spent most of its time plotting biorhythm charts with a program written by 
said boss (who also gave me my first taste of grass, and I hated it).

Anyway...

Elec Eng (our foe for some weird reason, as the CSU was regarded as "The 
Man") left a "valuable" tape online, after using our plotter for some 
unrelated reason.

DEC Field Circus then took the box, as they were waiting for the monthly 
PM.

Can you guess the rest?

Let's just say that Elec Eng quickly learned what the "write ring" did...

Hey, is Peter Ivanov on this list?  He was the one who moaned to me about 
us writing over an unlabelled write-enabled tape...

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 22:07                 ` Tim Bradshaw
@ 2018-04-23 22:15                   ` Warner Losh
  2018-04-23 23:30                     ` Grant Taylor
  2018-04-23 23:45                   ` Arthur Krewat
  1 sibling, 1 reply; 105+ messages in thread
From: Warner Losh @ 2018-04-23 22:15 UTC (permalink / raw)


On Mon, Apr 23, 2018 at 4:07 PM, Tim Bradshaw <tfb at tfeb.org> wrote:

> On 23 Apr 2018, at 21:47, Grant Taylor via TUHS <tuhs at minnie.tuhs.org>
> wrote:
> > I had always wondered where Solaris (SunOS) got it's use of the
> different slices, including the slice that was the entire disk from.
> >
> > Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD.
> >
>
>
> There is a wonderful Sun cretinism about this. At some recent time (I am
> not sure how recent but probably mid 2000s), someone worked out that you
> wanted swap to be at one end of the disk (I think the outside) because on
> modern disks the data rate changes across the disk and you wanted it at the
> end with the highest data rate.  But lots of things knew that swap was on
> s1, the second partition.  So they changed the default installation tool so
> the slices of the disk were out of order: s1 was the first, s0 the second,
> s2 was the whole disk (which it already was) and so on.  This was
> enormously entertaining in a bad way if you made the normal assumption that
> the slices were in order.  There was also (either then or before) some
> magic needed such that swapping never touched the first n blocks of the
> disk where the label and boot blocks were, and it was possible to get this
> wrong so the machine would happily boot, run but would then fail to boot
> again, usually at a most inconvenient time.
>
> And the cretinism was that this was mid 2000s:  if you had machine that
> was paging the answer was to buy more memory not to arrange for faster swap
> space: it was solving a problem that nobody had any more.
>

It's weird. These days lower LBAs perform better on spinning drives. We're
seeing about 1.5x better performance on the first 30% of a drive than on
the last 30%, at least for read speeds for video streaming....

Warner


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 22:01 Noel Chiappa
@ 2018-04-23 22:09 ` Warner Losh
  0 siblings, 0 replies; 105+ messages in thread
From: Warner Losh @ 2018-04-23 22:09 UTC (permalink / raw)


On Mon, Apr 23, 2018 at 4:01 PM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:
>
>
> No, in V6 filesystems, block numbers (in inodes, etc - also the file system
> size in the superblock) were only 16 bits, so a 50MB disk (100K blocks)
> had to
> be split up into partitions to use it all. True of the RP03/04 in V6 too
> (see
> the man page above).
>

Venix 86 2.0 has a 16-bit daddr_t as well. Venix is V7 based and is
limited to 32MB. It looks like the filesystem includes this limitation as
well. The winchester driver doesn't have the partitioning trick that the r*
drivers from the PDP-11 have, it seems.

Warner


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 20:47               ` Grant Taylor
  2018-04-23 21:06                 ` Clem Cole
@ 2018-04-23 22:07                 ` Tim Bradshaw
  2018-04-23 22:15                   ` Warner Losh
  2018-04-23 23:45                   ` Arthur Krewat
  1 sibling, 2 replies; 105+ messages in thread
From: Tim Bradshaw @ 2018-04-23 22:07 UTC (permalink / raw)


On 23 Apr 2018, at 21:47, Grant Taylor via TUHS <tuhs at minnie.tuhs.org> wrote:
> I had always wondered where Solaris (SunOS) got it's use of the different slices, including the slice that was the entire disk from.
> 
> Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD.
> 


There is a wonderful Sun cretinism about this. At some recent time (I am not sure how recent but probably mid 2000s), someone worked out that you wanted swap to be at one end of the disk (I think the outside) because on modern disks the data rate changes across the disk and you wanted it at the end with the highest data rate.  But lots of things knew that swap was on s1, the second partition.  So they changed the default installation tool so the slices of the disk were out of order: s1 was the first, s0 the second, s2 was the whole disk (which it already was) and so on.  This was enormously entertaining in a bad way if you made the normal assumption that the slices were in order.  There was also (either then or before) some magic needed such that swapping never touched the first n blocks of the disk where the label and boot blocks were, and it was possible to get this wrong so the machine would happily boot, run but would then fail to boot again, usually at a most inconvenient time.

And the cretinism was that this was mid 2000s:  if you had machine that was paging the answer was to buy more memory not to arrange for faster swap space: it was solving a problem that nobody had any more.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-23 22:01 Noel Chiappa
  2018-04-23 22:09 ` Warner Losh
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-23 22:01 UTC (permalink / raw)


    > From: Clem Cole

    > To be honest, I really don't remember - but I know we used letters for
    > the different partitions on the 11/70 before BSD showed up.

In V6 (and probably before that, too), it was numbers:

  http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man4/rp.4

So on my machine, which had 2 x 50MB CalChomps with a Diva controller, which
we had to split up into two partitions each (below), they were dv00, dv01, dv10
and dv11. Letters for the partitions made it easier...

    > The reason for the partition originally was (and it must have been 6th
    > edition when I first saw it), DEC finally made a disk large enough that
    > number of blocks overflowed a 16 bit integer.  So splitting the disk
    > into smaller partitions allowed the original seek(2) to work without
    > overflow.

No, in V6 filesystems, block numbers (in inodes, etc - also the file system
size in the superblock) were only 16 bits, so a 50MB disk (100K blocks) had to
be split up into partitions to use it all. True of the RP03/04 in V6 too (see
the man page above).
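For concreteness, the arithmetic (mine, not Noel's): 50 MB / 512 bytes per
block is about 102,400 blocks, while an unsigned 16-bit block number tops
out at 65,535 - i.e. a single filesystem could span at most 32 MB, hence
the split into partitions.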

	Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 21:14                   ` Dan Mick
@ 2018-04-23 21:27                     ` Clem Cole
  0 siblings, 0 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-23 21:27 UTC (permalink / raw)



On Mon, Apr 23, 2018 at 5:14 PM, Dan Mick <danmick at gmail.com> wrote:

> On 04/23/2018 02:06 PM, Clem Cole wrote:
> >
> >
> > On Mon, Apr 23, 2018 at 4:47 PM, Grant Taylor via TUHS
> > <tuhs at minnie.tuhs.org <mailto:tuhs at minnie.tuhs.org>> wrote:
> >
> >     On 04/23/2018 11:51 AM, Clem Cole wrote:
> >
> >         By the time of 4.X, the RP06 was 'partitioned' into 'rings'
> >         (some overlapping).  The 'a' partition was root, the 'b' was
> >         swap and one fo the others was the rest.  Later the 'c' was a
> >         short form for copying the entire disk.
> >
> >
> >     I had always wondered where Solaris (SunOS) got it's use of the
> >     different slices, including the slice that was the entire disk from.
> >
> >     Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD
> >
> > ​It was not BSD - it was research.  It may have been in 6th, but it was
> > definitely in 7th.  Cut/pasted from the V7 PDP-11 rp(4) man page:
> >
> >     *NAME*
> >
> >         rp − RP-11/RP03 moving-head disk
> >
> >     *DESCRIPTION*
> >
> >         The files rp0 ... rp7 refer to sections of RP disk drive 0. The
> >         files rp8 ... rp15 refer to drive 1 etc. This
> >
> >         allows a large disk to be broken up into more manageable pieces.
> >
> >         The origin and size of the pseudo-disks on each drive are as
> >         follows:
> >
> >             disk start length
> >
> >             0 0 81000
> >
> >             1 0 5000
> >
> >             2 5000 2000
> >
> >             3 7000 74000
> >
> >             4-7 unassigned
> >
> >         Thus rp0 covers the whole drive, while rp1, rp2, rp3 can serve
> >         usefully as a root, swap, and mounted user
> >
> >         file system respectively.
> >
> >         The rp files access the disk via the system’s normal buffering
> >         mechanism and may be read and written
> >
> >         without regard to physical disk records. There is also a ‘raw’
> >         interface which provides for direct transmission
> >
> >         between the disk and the user’s read or write buffer. A single
> >         read or write call results in exactly one
> >
> >         I/O operation and therefore raw I/O is considerably more
> >         efficient when many words are transmitted. The
> >
> >         names of the raw RP files begin with rrp and end with a number
> >         which selects the same disk section as the
> >
> >         corresponding rp file.
> >
> >         In raw I/O the buffer must begin on a word boundary.​
> >
>
> But...that has numbers, not letters, and the third partition is not the
> whole drive, the first one is....?
>

Yup -- disks were pretty expensive in those days ($20-30K for a <100M drive)
so often people did not have more than one.  So they started with rp1, rp2,
etc.
As disks got a little cheaper and having more than one RP06 became
possible (the RP06, aka the IBM 3330 - project Winchester - was a huge 200M
drive; we had 3 on the Teklabs machine and that was considered very, very
generous), letters then became the convention used in /dev/, i.e.
/dev/{,r}rp0{a,b,c,d..} for each of the minor numbers.

To be honest, I really don't remember - but I know we used letters for the
different partitions on the 11/70 before BSD showed up.

The reason for the partition originally was (and it must have been 6th
edition when I first saw it) that DEC finally made a disk large enough that
the number of blocks overflowed a 16-bit integer.   So splitting the disk into
smaller partitions allowed the original seek(2) to work without overflow.
V7 introduced lseek(2), where the offset was a long.

Clem

​



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 21:06                 ` Clem Cole
@ 2018-04-23 21:14                   ` Dan Mick
  2018-04-23 21:27                     ` Clem Cole
  0 siblings, 1 reply; 105+ messages in thread
From: Dan Mick @ 2018-04-23 21:14 UTC (permalink / raw)



On 04/23/2018 02:06 PM, Clem Cole wrote:
> 
> 
> On Mon, Apr 23, 2018 at 4:47 PM, Grant Taylor via TUHS
> <tuhs at minnie.tuhs.org <mailto:tuhs at minnie.tuhs.org>> wrote:
> 
>     On 04/23/2018 11:51 AM, Clem Cole wrote:
> 
>         By the time of 4.X, the RP06 was 'partitioned' into 'rings'
>         (some overlapping).  The 'a' partition was root, the 'b' was
>         swap and one fo the others was the rest.  Later the 'c' was a
>         short form for copying the entire disk.
> 
> 
>     I had always wondered where Solaris (SunOS) got it's use of the
>     different slices, including the slice that was the entire disk from.
> 
>     Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD
> 
> ​It was not BSD - it was research.  It may have been in 6th, but it was
> definitely in 7th.  Cut/pasted from the V7 PDP-11 rp(4) man page:
> 
>     *NAME*
> 
>         rp − RP-11/RP03 moving-head disk
> 
>     *DESCRIPTION*
> 
>         The files rp0 ... rp7 refer to sections of RP disk drive 0. The
>         files rp8 ... rp15 refer to drive 1 etc. This
> 
>         allows a large disk to be broken up into more manageable pieces.
> 
>         The origin and size of the pseudo-disks on each drive are as
>         follows:
> 
>             disk start length
> 
>             0 0 81000
> 
>             1 0 5000
> 
>             2 5000 2000
> 
>             3 7000 74000
> 
>             4-7 unassigned
> 
>         Thus rp0 covers the whole drive, while rp1, rp2, rp3 can serve
>         usefully as a root, swap, and mounted user
> 
>         file system respectively.
> 
>         The rp files access the disk via the system’s normal buffering
>         mechanism and may be read and written
> 
>         without regard to physical disk records. There is also a ‘raw’
>         interface which provides for direct transmission
> 
>         between the disk and the user’s read or write buffer. A single
>         read or write call results in exactly one
> 
>         I/O operation and therefore raw I/O is considerably more
>         efficient when many words are transmitted. The
> 
>         names of the raw RP files begin with rrp and end with a number
>         which selects the same disk section as the
> 
>         corresponding rp file.
> 
>         In raw I/O the buffer must begin on a word boundary.​
> 

But...that has numbers, not letters, and the third partition is not the
whole drive, the first one is....?



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 20:47               ` Grant Taylor
@ 2018-04-23 21:06                 ` Clem Cole
  2018-04-23 21:14                   ` Dan Mick
  2018-04-23 22:07                 ` Tim Bradshaw
  1 sibling, 1 reply; 105+ messages in thread
From: Clem Cole @ 2018-04-23 21:06 UTC (permalink / raw)



On Mon, Apr 23, 2018 at 4:47 PM, Grant Taylor via TUHS <tuhs at minnie.tuhs.org
> wrote:

> On 04/23/2018 11:51 AM, Clem Cole wrote:
>
>> By the time of 4.X, the RP06 was 'partitioned' into 'rings' (some
>> overlapping).  The 'a' partition was root, the 'b' was swap and one of the
>> others was the rest.  Later the 'c' was a short form for copying the entire
>> disk.
>>
>
> I had always wondered where Solaris (SunOS) got its use of the different
> slices, including the slice that was the entire disk from.
>
> Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD

​It was not BSD - it was research.  It may have been in 6th, but it was
definitely in 7th.  Cut/pasted from the V7 PDP-11 rp(4) man page:

NAME
     rp − RP-11/RP03 moving-head disk

DESCRIPTION
     The files rp0 ... rp7 refer to sections of RP disk drive 0. The files
     rp8 ... rp15 refer to drive 1 etc. This allows a large disk to be
     broken up into more manageable pieces.

     The origin and size of the pseudo-disks on each drive are as follows:

         disk    start   length
         0           0    81000
         1           0     5000
         2        5000     2000
         3        7000    74000
         4-7     unassigned

     Thus rp0 covers the whole drive, while rp1, rp2, rp3 can serve
     usefully as a root, swap, and mounted user file system respectively.

     The rp files access the disk via the system’s normal buffering
     mechanism and may be read and written without regard to physical disk
     records. There is also a ‘raw’ interface which provides for direct
     transmission between the disk and the user’s read or write buffer. A
     single read or write call results in exactly one I/O operation and
     therefore raw I/O is considerably more efficient when many words are
     transmitted. The names of the raw RP files begin with rrp and end
     with a number which selects the same disk section as the
     corresponding rp file.

     In raw I/O the buffer must begin on a word boundary.
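The table above maps naturally onto a per-minor-device {start, length}
array in the driver.  A sketch in that spirit - the structure and names
here are illustrative, not copied from the V7 rp.c source; only the
numbers come from the man page quoted above:

    #include <stdio.h>

    /* Illustrative only: one entry per pseudo-disk, per the rp(4) table. */
    struct rpsection {
        long start;                     /* first block of the section */
        long nblocks;                   /* length of the section in blocks */
    };

    static struct rpsection rp_sizes[] = {
        {    0, 81000 },                /* rp0: the whole drive */
        {    0,  5000 },                /* rp1: root */
        { 5000,  2000 },                /* rp2: swap */
        { 7000, 74000 },                /* rp3: mounted user file system */
    };

    int main(void)
    {
        for (int i = 0; i < 4; i++)
            printf("rp%d: start %5ld, length %5ld blocks\n",
                   i, rp_sizes[i].start, rp_sizes[i].nblocks);
        return 0;
    }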



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 17:51             ` Clem Cole
  2018-04-23 18:30               ` Ron Natalie
@ 2018-04-23 20:47               ` Grant Taylor
  2018-04-23 21:06                 ` Clem Cole
  2018-04-23 22:07                 ` Tim Bradshaw
  1 sibling, 2 replies; 105+ messages in thread
From: Grant Taylor @ 2018-04-23 20:47 UTC (permalink / raw)


On 04/23/2018 11:51 AM, Clem Cole wrote:
> By the time of 4.X, the RP06 was 'partitioned' into 'rings' (some 
> overlapping).  The 'a' partition was root, the 'b' was swap and one of
> the others was the rest.  Later the 'c' was a short form for copying 
> the entire disk.

I had always wondered where Solaris (SunOS) got its use of the
different slices, including the slice that was the entire disk from.

Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD.



-- 
Grant. . . .
unix || die



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 18:41 Noel Chiappa
@ 2018-04-23 19:09 ` Clem Cole
  2018-04-23 23:01 ` Dave Horsfall
  1 sibling, 0 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-23 19:09 UTC (permalink / raw)



On Mon, Apr 23, 2018 at 2:41 PM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>   Speaking of fixed-head disks, one of the Bell systems used (IIRC) an RS04
> fixed-head disk for the root. DEC apparently only used that disk for
> swapping
> in their OS's... So the DEC diagnostics felt free to scribble on the disk.
> So, Field Circus comes in to work on the machine... Ooops!
>
On the teklabs 11/70 the RS04 was /tmp - not quite as dangerous as root,
but since the fs got wiped out the system would not boot, as it would fail
trying to mount /tmp.

IIRC it was the field exerciser that did that by default.   I seem to
remember that you could configure it to use something else, like the field
service pack itself, or maybe no disk at all, but if I remember right, if
the exerciser found an RS04 it thought it was available by default.   So I
had a big sticker on the FS pack that said "see me before you load it"
after the first time the service guys took out /tmp.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-23 18:41 Noel Chiappa
  2018-04-23 19:09 ` Clem Cole
  2018-04-23 23:01 ` Dave Horsfall
  0 siblings, 2 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-23 18:41 UTC (permalink / raw)


    > From: "Ron Natalie"

    > I'm pretty sure that swapping in V6 took place to a major/minor number
    > configured at kernel build time.

Yup, in c.c, along with the block/character device switches (which converted
major device numbers to routines).
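From memory, the swap-related knobs in a generated c.c looked roughly like
this for an RK05-rooted system - shown with modern initializer syntax (the
original used the old 'int swplo 4000;' form), and the values are
illustrative for a 4872-block pack, 4000 blocks of file system plus 872 of
swap:

    /* Sketch of the V6 conf/c.c swap configuration - illustrative values. */
    int rootdev = (0 << 8) | 0;   /* major 0, minor 0: the root pack */
    int swapdev = (0 << 8) | 0;   /* swap lives on the same pack */
    int swplo   = 4000;           /* first block of the swap area */
    int nswap   = 872;            /* blocks reserved for swapping */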

    > You could create a dev node for the swap device, but it wasn't used for
    > the actual swapping.

Yes.

    > We actually dedicated a full 1024 block RF11 fixed head to the system in
    > the early days

Speaking of fixed-head disks, one of the Bell systems used (IIRC) an RS04
fixed-head disk for the root. DEC apparently only used that disk for swapping
in their OS's... So the DEC diagnostics felt free to scribble on the disk.
So, Field Circus comes in to work on the machine... Ooops!

    Noel



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 17:51             ` Clem Cole
@ 2018-04-23 18:30               ` Ron Natalie
  2018-04-25 14:02                 ` Tom Ivar Helbekkmo
  2018-04-23 20:47               ` Grant Taylor
  1 sibling, 1 reply; 105+ messages in thread
From: Ron Natalie @ 2018-04-23 18:30 UTC (permalink / raw)



RK05’s were 4872 blocks.    Don’t know why that number has stuck with me, too many invocations of mkfs I guess.    Oddly, DEC software for some reason never used the last 72 blocks.   We used the standalone ROLLIN to make backup copies of our root file system and I remember having to patch the program to make sure it copied all the blocks.

 

Yeah, I remembrer swaplo and swapdev.   We actually dedicated a full 1024 block RF11 fixed head to the system in the early days until we got the system of RK05 packs.    I remember buying my personal RK05 pack for $75 thinking that was a lot of personal storage (2.4MB!). 

 

Amusingly, JHU believed in volcopy for backups.   Our 80 MB drive was divided into multiples of an RK05 pack.    The root file system was 5x4872 blocks.   The main user file system was another 5.   The rest was divided up into single RK05-size pieces.    I remember writing a standalone volcopy to replace ROLLIN that was called “The Fabulous Amtork” program.   The 80M drive was an Ampex.    AM to RK, get it?   I have no idea when or why the adjective “fabulous” became associated with it, but that was what it was always called.



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 17:30           ` Ron Natalie
@ 2018-04-23 17:51             ` Clem Cole
  2018-04-23 18:30               ` Ron Natalie
  2018-04-23 20:47               ` Grant Taylor
  0 siblings, 2 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-23 17:51 UTC (permalink / raw)



On Mon, Apr 23, 2018 at 1:30 PM, Ron Natalie <ron at ronnatalie.com> wrote:

>
>
>
>
> Well, I had known but forgotten in fact.  There's also a distinction
> between whether a system swaps/pages onto a dedicated device and whether it
> exposes that device by some special name in /dev.
>
>
>
> I’m pretty sure that swapping in V6 took place to a major/minor number
> configured at kernel build time.   You could create a dev node for the swap
> device, but it wasn’t used for the actual swapping.
>

Exactly...   For instance an RK04 was less than 5K blocks (4620 or some
such - I've forgotten the actual amount).  The disk was mkfs'ed to the
first 4K and the leftover was given to the swap system.   By the time of
4.X, the RP06 was 'partitioned' into 'rings' (some overlapping).  The 'a'
partition was root, the 'b' was swap and one of the others was the rest.
Later the 'c' was a short form for copying the entire disk.

What /dev/drum did was let you cobble up those hunks of reserved space and
view them as a single device.   I've forgotten what user-space programs
used it, probably some of the tools like ps, vmstat *etc*.   You'd have to
look at the sources.
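The interleaving itself is easy to sketch.  Purely illustrative - the chunk
size, partition names and sizes below are made up, not taken from the BSD
sources - but it shows how a logical 'drum' block can fan out round-robin
across the swap areas:

    #include <stdio.h>

    #define CHUNK 2048L                  /* blocks per interleave chunk (assumed) */

    struct swpart { const char *name; long nblocks; };

    static struct swpart swmap[] = {     /* hypothetical swap partitions */
        { "hp0b", 33440 },
        { "hp1b", 33440 },
    };
    #define NSW (sizeof swmap / sizeof swmap[0])

    /* Map a logical "drum" block to (partition, block within partition). */
    static void drum_map(long blkno, int *part, long *offset)
    {
        long chunk = blkno / CHUNK;      /* which interleave chunk */
        *part = (int)(chunk % NSW);      /* round-robin over the partitions */
        *offset = (chunk / NSW) * CHUNK + blkno % CHUNK;
    }

    int main(void)
    {
        long blks[] = { 0, 2047, 2048, 4096, 10000 };
        for (int i = 0; i < 5; i++) {
            int p; long off;
            drum_map(blks[i], &p, &off);
            printf("drum block %5ld -> %s block %5ld\n",
                   blks[i], swmap[p].name, off);
        }
        return 0;
    }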



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-23 16:42         ` Tim Bradshaw
@ 2018-04-23 17:30           ` Ron Natalie
  2018-04-23 17:51             ` Clem Cole
  0 siblings, 1 reply; 105+ messages in thread
From: Ron Natalie @ 2018-04-23 17:30 UTC (permalink / raw)



 

 

> Well, I had known but forgotten in fact.  There's also a distinction
> between whether a system swaps/pages onto a dedicated device and whether it
> exposes that device by some special name in /dev.

 

I’m pretty sure that swapping in V6 took place to a major/minor number
configured at kernel build time.   You could create a dev node for the swap
device, but it wasn’t used for the actual swapping.

 



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-22 17:37       ` Clem Cole
                           ` (2 preceding siblings ...)
  2018-04-22 21:51         ` Dave Horsfall
@ 2018-04-23 16:42         ` Tim Bradshaw
  2018-04-23 17:30           ` Ron Natalie
  3 siblings, 1 reply; 105+ messages in thread
From: Tim Bradshaw @ 2018-04-23 16:42 UTC (permalink / raw)


On 22 Apr 2018, at 18:37, Clem Cole <clemc at ccc.com> wrote:
> 
> But the term 'core file' stuck; tools knew about it, as did the programmers.   The difference is that today's systems from Windows to UNIX flavors stopped needing a dedicated swapping or paging space and instead were taught to just use empty FS blocks.  So today's hacker has grown up without really knowing what /dev/swap or /dev/drum was all about -- in fact that was exactly the question that started this thread.

Well, I had known but forgotten in fact.  There's also a distinction between whether a system swaps/pages onto a dedicated device and whether it exposes that device by some special name in /dev.  Solaris does (or did until fairly recently: I don't remember what the ZFSy systems do) generally use one or more special devices (not usually whole disks but they could be, and it could swap on files but not, I assume, write crash dumps to them) but I'm pretty sure they were not exposed as /dev/<something> other than the name they would already have in /dev/(r)dsk.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-22 20:58         ` Mutiny
@ 2018-04-22 22:37           ` Clem cole
  0 siblings, 0 replies; 105+ messages in thread
From: Clem cole @ 2018-04-22 22:37 UTC (permalink / raw)



I’m not sure how you got there. I hardly said or implied that Unix started in 78 or on the Vax.  I’m fairly aware that Ken had core in his PDP-7, as did many of us (including myself) on our many PDP-11s in the old times.  I started with Fifth and Sixth Edition.  Dumping core had meaning to all of us in those days.

All I said was that by the time of the VAX - which very much did spread the gospel of Unix to the world - core memory had fallen from favor and widespread use.   The PDP-11 was the last system DEC released with core.

BTW, if you want to be correct about dates - DEC released the Vax in 76, not 78 (I personally used to program Vax serial #1 at CMU under VMS 1.0 before I was at UCB, which is what Dan had asked). Also FWIW, Bill and Ozzie, whom I knew from my UCB days, did not do the UCB Vax work until a few years later, when the UCB EECS Dept got their first Berkeley VAX and Prof Fateman, who had come from MIT, wanted to run MAC Lisp on it - which very much required VM support.  V32 (the V7 port for the Vax from AT&T) swapped.  Bill and Ozzie added paging support, which was released as BSD 3.0 (Ozzie graduated and left for Cornell as I recall, but where he went is fuzzy in my memory now). Primarily Bill, but with the help of us others, followed quickly with 4.0 and 4.1 (the latter being the first widespread Vax release) with his Fastvax support. Later still, in the early 1980s, Bill led the CSRG team and released 4.1a/b/c.  Bill left for Sun and I left for Masscomp.  Kirk, Sam and Keith pushed out 4.2 et al to the final NET2 release, which was after my time.

Of those on the list from UCB in my time, there were also Mary Ann Horton in EECS and Debbie S., who was up the hill at LBL, both of whom also sometimes reply - I do look to them to correct/affirm me for UCB history.

But I personally go back in Unix history to the early/mid 1970s at CMU.  I often defer to Noel C for memories of networking things, which he was a part of at MIT in the same time frame.   I defer to Doug, Steve and obviously Ken for things at AT&T and history before Fifth Edition.

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

> On Apr 22, 2018, at 4:58 PM, Mutiny <mutiny.mutiny at rediffmail.com> wrote:
> 
> From: Clem Cole <clemc at ccc.com>
> Sent: Sun, 22 Apr 2018 23:08:32
> Thus, the UNIX term 'core dump' was really meaningless.
> UNIX didn't start in 1978 with the VAX and solid state ram. It started earlier when core was cheaper than ss ram in the dec world which was the case even in '77. Yes there was core, and yes, core dump isn't meaningless at all.
> ...


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-22 17:37       ` Clem Cole
  2018-04-22 19:14         ` Bakul Shah
  2018-04-22 20:58         ` Mutiny
@ 2018-04-22 21:51         ` Dave Horsfall
  2018-04-25  1:27           ` Dan Stromberg
  2018-04-23 16:42         ` Tim Bradshaw
  3 siblings, 1 reply; 105+ messages in thread
From: Dave Horsfall @ 2018-04-22 21:51 UTC (permalink / raw)



On Sun, 22 Apr 2018, Clem Cole wrote:

> The difference is that today's systems from Windows to UNIX flavors
> stopped needing a dedicated swapping or paging space and instead were
> taught to just use empty FS blocks.  So today's hacker has grown up
> without really knowing what /dev/swap or /dev/drum was all about -- in
> fact that was exactly the question that started this thread.

Heh heh...  I remember the day that Basser (SydU) got AUSAM running on 
their spanking new VAX.  It crashed, and it was traced to it swapping for 
the first time in its life (before that, it merely paged).

Now, how many youngsters know the difference between paging and swapping?

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-22 17:37       ` Clem Cole
  2018-04-22 19:14         ` Bakul Shah
@ 2018-04-22 20:58         ` Mutiny
  2018-04-22 22:37           ` Clem cole
  2018-04-22 21:51         ` Dave Horsfall
  2018-04-23 16:42         ` Tim Bradshaw
  3 siblings, 1 reply; 105+ messages in thread
From: Mutiny @ 2018-04-22 20:58 UTC (permalink / raw)


From: Clem Cole <clemc@ccc.com>
Sent: Sun, 22 Apr 2018 23:08:32
Thus, the UNIX term 'core dump' was really meaningless.

UNIX didn't start in 1978 with the VAX and solid state ram. It started
earlier, when core was cheaper than ss ram in the dec world, which was the
case even in '77. Yes there was core, and yes, core dump isn't meaningless
at all.
...


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-22 17:37       ` Clem Cole
@ 2018-04-22 19:14         ` Bakul Shah
  2018-04-22 20:58         ` Mutiny
                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 105+ messages in thread
From: Bakul Shah @ 2018-04-22 19:14 UTC (permalink / raw)


On Sun, 22 Apr 2018 13:37:27 -0400 Clem Cole <clemc at ccc.com> wrote:
> available to page itself.   it really is just like the fact that by the
> time of the VAX, DEC was not shipping core memories at all (and few 11's
> shipped with core either as, thanks to Moore's law, the price of
> semiconductor memory had dropped), so calling the main system memory 'core'
> was obsolete.   Thus, the UNIX term 'core dump' was really meaningless.
> [In fact, Magic, the OS for the Tektronix Magnolia Machine has 'mos dump'
> files - because I did that].

I have wondered if the word "kernel" got used instead of
"core" for the essential part of an OS because core was
already in use in relation to memory (and with a different
derivation). "kernel" may have been used in this context in
the '60s but I don't really know who used it first.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-22 18:06 Norman Wilson
  0 siblings, 0 replies; 105+ messages in thread
From: Norman Wilson @ 2018-04-22 18:06 UTC (permalink / raw)


Clem Cole:

  On the other hand, we still 'dump core' and use the core files for
  debugging.  So, while the term 'drum' lost its meaning, 'core file' - though
  it might be considered 'quaint' by today's hacker - still has meaning.

====

Just as we still speak of dialling and hanging up the phone,
electing Presidents, and other actions long since made obsolete
by changes of technology and culture.

Norman Wilson
Toronto ON


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-22 17:01     ` Lars Brinkhoff
@ 2018-04-22 17:37       ` Clem Cole
  2018-04-22 19:14         ` Bakul Shah
                           ` (3 more replies)
  0 siblings, 4 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-22 17:37 UTC (permalink / raw)



On Sun, Apr 22, 2018 at 1:01 PM, Lars Brinkhoff <lars at nocrew.org> wrote:

> Dan Cross wrote:
> > Why was it called drum? I imagine that's historical license coupled
> > with grad student imagination, but I'm curious if it has origin in
> > actual hardware used at UC Berkeley. Clem, that was roughly your era,
> > was it not?
>
> Seems like the Project Genie 940 at UCB had a drum.  Maybe someone
> wanted to carry the tradition forward.


The 'someone' in all of this was Bill Joy (wnj).  As I said, in those
days, all of us knew of older systems that used 'paging drums' - it was a
pretty common term for the hunk-a-storage that the system dedicated to be
available to page itself.   It really is just like the fact that by the
time of the VAX, DEC was not shipping core memories at all (and few 11's
shipped with core either as, thanks to Moore's law, the price of
semiconductor memory had dropped), so calling the main system memory 'core'
was obsolete.   Thus, the UNIX term 'core dump' was really meaningless.
[In fact, Magic, the OS for the Tektronix Magnolia Machine had 'mos dump'
files - because I did that].

But the term 'core file' stuck; tools knew about it, as did the
programmers.  The difference is that today's systems from Windows to UNIX
flavors stopped needing a dedicated swapping or paging space and instead
were taught to just use empty FS blocks.  So today's hacker has grown up
without really knowing what /dev/swap or /dev/drum was all about -- in fact
that was exactly the question that started this thread.

On the other hand, we still 'dump core' and use the core files for
debugging.  So, while the term 'drum' lost its meaning, 'core file' - though
it might be considered 'quaint' by today's hacker - still has meaning.

Clem



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 16:12   ` Dan Cross
  2018-04-20 16:21     ` Clem Cole
  2018-04-20 16:33     ` Warner Losh
@ 2018-04-22 17:01     ` Lars Brinkhoff
  2018-04-22 17:37       ` Clem Cole
  2 siblings, 1 reply; 105+ messages in thread
From: Lars Brinkhoff @ 2018-04-22 17:01 UTC (permalink / raw)


Dan Cross wrote:
> Why was it called drum? I imagine that's historical license coupled
> with grad student imagination, but I'm curious if it has origin in
> actual hardware used at UC Berkeley. Clem, that was roughly your era,
> was it not?

Seems like the Project Genie 940 at UCB had a drum.  Maybe someone
wanted to carry the tradition forward.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 23:35   ` Dave Horsfall
@ 2018-04-22 11:48     ` Steve Simon
  0 siblings, 0 replies; 105+ messages in thread
From: Steve Simon @ 2018-04-22 11:48 UTC (permalink / raw)



i remember the interdata i started my unix career on. it had a halt light which was a good indication of cpu inactivity. we planned (though never did) to attach a light dependent resistor, a big cap and an old brass meter in a wood and glass case so we could have a measure of “bored” to “thrashed” in true frankenstein’s lab style.

happy days
-Steve


> On 21 Apr 2018, at 00:35, Dave Horsfall <dave at horsfall.org> wrote:
> 
>> On Fri, 20 Apr 2018, William Pechter wrote:
>> 
>> Also had a write protect like every good drive should.
> 
> Not only that, but I note that my Mac does not have a visible disk-activity indicator; not surprising, really, as it seems to be thrashing like hell whenever I have the temerity to focus (i.e. click) upon another window.
> 
> Now, my Mac's system disk is external (don't ask), and it has a barely visible blue LED, which is why I know when it thrashes sometimes.  I intend to throw together a photo-transistor and a high-efficiency LED (plus a 9v battery) for a more visible "Danger!  Will Robinson!" alert.
> 
> -- 
> Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 17:16 ` William Pechter
@ 2018-04-20 23:35   ` Dave Horsfall
  2018-04-22 11:48     ` Steve Simon
  0 siblings, 1 reply; 105+ messages in thread
From: Dave Horsfall @ 2018-04-20 23:35 UTC (permalink / raw)


On Fri, 20 Apr 2018, William Pechter wrote:

> Also had a write protect like every good drive should.

Not only that, but I note that my Mac does not have a visible 
disk-activity indicator; not surprising, really, as it seems to be 
thrashing like hell whenever I have the temerity to focus (i.e. click) 
upon another window.

Now, my Mac's system disk is external (don't ask), and it has a barely 
visible blue LED, which is why I know when it thrashes sometimes.  I 
intend to throw together a photo-transistor and a high-efficiency LED 
(plus a 9v battery) for a more visible "Danger!  Will Robinson!" alert.

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 19:17       ` Ron Natalie
  2018-04-20 20:23         ` Clem Cole
@ 2018-04-20 22:10         ` Dave Horsfall
  1 sibling, 0 replies; 105+ messages in thread
From: Dave Horsfall @ 2018-04-20 22:10 UTC (permalink / raw)


On Fri, 20 Apr 2018, Ron Natalie wrote:

> Drums were usually fixed head, although those UNIVACers out there will 
> remember the Fastrand drums which had flying head.  Massive units looked 
> like 6' lengths of sewer pipe covered in iron oxide. It was sort of the 
> basis of UNIVAC storage units.

Now you have me remembering (always dangerous)...  Our DEC Field Circus 
gingerbeer mentioned that he had to maintain a drum, whereby the drum 
itself slid back and forth, and he used to take the female operators to, 
ahem, view it in action...

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 19:17       ` Ron Natalie
@ 2018-04-20 20:23         ` Clem Cole
  2018-04-20 22:10         ` Dave Horsfall
  1 sibling, 0 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-20 20:23 UTC (permalink / raw)



On Fri, Apr 20, 2018 at 3:17 PM, Ron Natalie <ron at ronnatalie.com> wrote:

>
> Drums were usually fixed head, although those UNIVACers out there will
> remember the Fastrand drums which had flying head.   Massive units looked
> like 6' lengths of sewer pipe covered in iron oxide.
> It was sort of the basis of UNIVAC storage units.
>

And big, and did I say really *noisy* ....  We had 4 of them and they were
so noisy that CMU partitioned the machine room so that they were by
themselves.  The CPUs and the operators were on the other side of a glass
wall.    I was looking at an old pic of me in the CMU machine room circa
'76.  You can see the console of the 1108 and the top of the door to the
'Fastrand' room (which was also where Vax serial #1 was, for space
reasons); but unfortunately one of the 360/67's 1Meg memory units (we had
4 - each was a little larger than a Vax 780 cabinet) is in the way, so the
drums cannot be seen.

One thing I'll never forget was the first time they powered up CMU's
first KL10 (which was DEC's first ECL based system).    It too was an early
unit and DEC Pittsburgh had not been trained on them yet.   Unfortunately,
the on-site provisioning was mistakenly set up for a KA10, which was
horribly insufficient.   Well, the techs blew every circuit breaker in the
building - shutting down everything.

The emergency lights came on and the room was eerily quiet, not a single
fan running, but there was this strange humming coming from the Fastrand
room.  There was so much stored energy in the drums, they kept spinning for
a very long time.



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 16:33     ` Warner Losh
@ 2018-04-20 19:17       ` Ron Natalie
  2018-04-20 20:23         ` Clem Cole
  2018-04-20 22:10         ` Dave Horsfall
  0 siblings, 2 replies; 105+ messages in thread
From: Ron Natalie @ 2018-04-20 19:17 UTC (permalink / raw)



I don't even remember a drum for any UNIBUS system.   We had an RF-11 fixed head disk on our PDP-11/45 and augmented that with a "bulk core" box that looked like the RF-11 to the system as well.

Drums were usually fixed head, although those UNIVACers out there will remember the Fastrand drums, which had flying heads.   Massive units that looked like 6' lengths of sewer pipe covered in iron oxide; it was sort of the basis of UNIVAC storage units.




^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 16:45 Noel Chiappa
  2018-04-20 16:53 ` Charles Anthony
@ 2018-04-20 17:16 ` William Pechter
  2018-04-20 23:35   ` Dave Horsfall
  1 sibling, 1 reply; 105+ messages in thread
From: William Pechter @ 2018-04-20 17:16 UTC (permalink / raw)



Noel Chiappa wrote:
>     > From: Warner Losh
>
>     > Drum memory stopped being a new thing in the early 70's.
>
> Mid 60's. Fixed-head disks replaced them - same basic concept, same amount of
> bits, less physical volume. Those lasted until the late 70's - early PDP-11
> Unixes have drivers for the RF11 and RS0x fixed-head disks.
>
> The 'fire-hose' drum on the GE 645 Multics was the last one I've heard
> of. Amusing story about it here:
>
>   http://www.multicians.org/low-bottle-pressure.html
>
> Although reading it, it may not have been (physically) a drum.
>
>     > There never was a drum device, at least a commercial, non-lab
>     > experiment, for the VAXen. They all swapped to spinning disks by then.
>
> s/spinning/non-fixed-head/.
>
> 	Noel
Just a note... back in my old DEC days I went around and installed a thing
called the ML11-A (IIRC).  It was a Massbus box of MK11/MS750 memory arrays
interfaced to a disk emulation controller.

Plug it on your Massbus, format the memory with the disk formatter, and you
have something like an RS04 swap device made out of memory with no rotation
delay.  Consider it an early SSD.

It used plain old PDP/VAX memory (and could format and bad-block out ECC
errors, so the memory didn't have to be perfect) and was software
compatible (I think) with the RS04...

I installed one at the AT&T/NY Bell site next door to the World Trade
Center back in 84-85 or so.

I wonder if it was still running on 9/11.  Word was Unix really was
improved on the 11/70 boxes without a huge investment in new gear.
Quick swap was a big help on heavily loaded boxes.

Loved seeing the diag run tests and light the fault light on the box of
memory.  Also had a write protect like every good drive should.



Bill

-- 
Digital had it then.  Don't you wish you could buy it now!
pechter-at-gmail.com  http://xkcd.com/705/



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 16:45 Noel Chiappa
@ 2018-04-20 16:53 ` Charles Anthony
  2018-04-20 17:16 ` William Pechter
  1 sibling, 0 replies; 105+ messages in thread
From: Charles Anthony @ 2018-04-20 16:53 UTC (permalink / raw)


On Fri, Apr 20, 2018 at 9:45 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Warner Losh
>
>     > Drum memory stopped being a new thing in the early 70's.
>
> Mid 60's. Fixed-head disks replaced them - same basic concept, same amount
> of
> bits, less physical volume. Those lasted until the late 70's - early PDP-11
> Unixes have drivers for the RF11 and RS0x fixed-head disks.
>
> The 'fire-hose' drum on the GE 645 Multics was the last one I've heard
> of. Amusing story about it here:
>
>   http://www.multicians.org/low-bottle-pressure.html
>
> Although reading it, it may not have been (physically) a drum.
>
>
Correct; it was a fixed head disk; nicknamed firehose for the high data
rate. I am not sure if this is the exact model number, but it was a
Librafile device:

https://archive.org/stream/TNM_Librafile_3800_mass_memory_from_Librascope_-__20170825_0139#page/n0/mode/2up

-- Charles


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-20 16:45 Noel Chiappa
  2018-04-20 16:53 ` Charles Anthony
  2018-04-20 17:16 ` William Pechter
  0 siblings, 2 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-20 16:45 UTC (permalink / raw)


    > From: Warner Losh

    > Drum memory stopped being a new thing in the early 70's.

Mid 60's. Fixed-head disks replaced them - same basic concept, same amount of
bits, less physical volume. Those lasted until the late 70's - early PDP-11
Unixes have drivers for the RF11 and RS0x fixed-head disks.

The 'fire-hose' drum on the GE 645 Multics was the last one I've heard
of. Amusing story about it here:

  http://www.multicians.org/low-bottle-pressure.html

Although reading it, it may not have been (physically) a drum.

    > There never was a drum device, at least a commercial, non-lab
    > experiment, for the VAXen. They all swapped to spinning disks by then.

s/spinning/non-fixed-head/.

	Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 16:12   ` Dan Cross
  2018-04-20 16:21     ` Clem Cole
@ 2018-04-20 16:33     ` Warner Losh
  2018-04-20 19:17       ` Ron Natalie
  2018-04-22 17:01     ` Lars Brinkhoff
  2 siblings, 1 reply; 105+ messages in thread
From: Warner Losh @ 2018-04-20 16:33 UTC (permalink / raw)


On Fri, Apr 20, 2018 at 10:12 AM, Dan Cross <crossd at gmail.com> wrote:

> That's a bit different. It's possible that some early Unix machines had
> actual drum devices for storage or swap (did any of them?), but the
> /dev/drum device is what Clem says it was.
>
> It's funny, I just happened across this a couple of days ago when I went
> looking for the `hier.7` man page from 4.4BSD-Lite2:
>
> https://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&
> sektion=7&manpath=4.4BSD+Lite2&arch=default&format=html
>
> It refers to this: https://www.freebsd.org/cgi/man.cgi?query=drum&
> sektion=4&apropos=0&manpath=4.4BSD+Lite2
>
> The claim is that it came from 3.0BSD. Why was it called drum? I imagine
> that's historical license coupled with grad student imagination, but I'm
> curious if it has origin in actual hardware used at UC Berkeley. Clem, that
> was roughly your era, was it not?
>

http://www.tuhs.org/cgi-bin/utree.pl?file=3BSD/usr/src/sys/sys/vmdrum.c

So there's something called drum in 3BSD. Haven't chased down the MAKEDEV
and config glue to turn it into /dev/drum, but it's enough to support the
'it originated in 3BSD' claim. It certainly wasn't in 32V since that had no
paging.

This was 1980. Drum memory stopped being a new thing in the early 70's. So
it was just recently obsolete. But its typical use was a very small, but
very fast, hard drive to swap things to. There never was a drum device, at
least a commercial, non-lab experiment, for the VAXen. They all swapped to
spinning disks by then.

Warner

        - Dan C.
>
>
> On Fri, Apr 20, 2018 at 12:00 PM, David Collantes <david at collantes.us>
> wrote:
>
>> I found a Wikipedia[0] entry for it.
>>
>> [0] https://en.m.wikipedia.org/wiki/Drum_memory
>> <https://en.m.wikipedia.org/wiki/Drum_memory?wprov=sfti1>
>>
>> --
>> David Collantes
>> +1-407-484-7171
>>
>> On Apr 20, 2018, at 11:02, Tim Bradshaw <tfb at tfeb.org> wrote:
>>
>> I am sure I remember a machine which had this (which would have been
>> running a BSD 4.2 port).  Is my memory right, and what was it for
>> (something related to swap?)?
>>
>> It is stupidly hard to search for (or, alternatively, there are just no
>> hits and the memory is false).
>>
>> --tim
>>
>>
>


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 16:12   ` Dan Cross
@ 2018-04-20 16:21     ` Clem Cole
  2018-04-20 16:33     ` Warner Losh
  2018-04-22 17:01     ` Lars Brinkhoff
  2 siblings, 0 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-20 16:21 UTC (permalink / raw)



On Fri, Apr 20, 2018 at 12:12 PM, Dan Cross <crossd at gmail.com> wrote:

> That's a bit different. It's possible that some early Unix machines had
> actual drum devices for storage or swap (did any of them?), but the
> /dev/drum device is what Clem says it was.
>
> It's funny, I just happened across this a couple of days ago when I went
> looking for the `hier.7` man page from 4.4BSD-Lite2:
>
> https://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&sek
> tion=7&manpath=4.4BSD+Lite2&arch=default&format=html
>
> It refers to this: https://www.freebsd.org/cgi/man.cgi?query=drum&sektion
> =4&apropos=0&manpath=4.4BSD+Lite2
>
> The claim is that it came from 3.0BSD.
>
Yes - I believe that is true.  I just looked at a 4.1 manual; it's
definitely there.




> ​​
> Why was it called drum?
>
wnj was being 'cute' -- drums were historically the device large systems
paged to.  So people understood the reference at the time.



> I imagine that's historical license coupled with grad student imagination,
> but I'm curious if it has origin in actual hardware used at UC Berkeley.
> Clem, that was roughly your era, was it not?
>
​Yes - very much my era,

Clem​


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 16:00 ` David Collantes
@ 2018-04-20 16:12   ` Dan Cross
  2018-04-20 16:21     ` Clem Cole
                       ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Dan Cross @ 2018-04-20 16:12 UTC (permalink / raw)


That's a bit different. It's possible that some early Unix machines had
actual drum devices for storage or swap (did any of them?), but the
/dev/drum device is what Clem says it was.

It's funny, I just happened across this a couple of days ago when I went
looking for the `hier.7` man page from 4.4BSD-Lite2:

https://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&sektion=7&manpath=4.4BSD+Lite2&arch=default&format=html

It refers to this:
https://www.freebsd.org/cgi/man.cgi?query=drum&sektion=4&apropos=0&manpath=4.4BSD+Lite2

The claim is that it came from 3.0BSD. Why was it called drum? I imagine
that's historical license coupled with grad student imagination, but I'm
curious if it has origin in actual hardware used at UC Berkeley. Clem, that
was roughly your era, was it not?

        - Dan C.


On Fri, Apr 20, 2018 at 12:00 PM, David Collantes <david at collantes.us>
wrote:

> I found a Wikipedia[0] entry for it.
>
> [0] https://en.m.wikipedia.org/wiki/Drum_memory
> <https://en.m.wikipedia.org/wiki/Drum_memory?wprov=sfti1>
>
> --
> David Collantes
> +1-407-484-7171
>
> On Apr 20, 2018, at 11:02, Tim Bradshaw <tfb at tfeb.org> wrote:
>
> I am sure I remember a machine which had this (which would have been
> running a BSD 4.2 port).  Is my memory right, and what was it for
> (something related to swap?)?
>
> It is stupidly hard to search for (or, alternatively, there are just no
> hits and the memory is false).
>
> --tim
>
>


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 15:02 Tim Bradshaw
  2018-04-20 15:58 ` Clem Cole
@ 2018-04-20 16:00 ` David Collantes
  2018-04-20 16:12   ` Dan Cross
  1 sibling, 1 reply; 105+ messages in thread
From: David Collantes @ 2018-04-20 16:00 UTC (permalink / raw)


I found a Wikipedia[0] entry for it. 

[0] https://en.m.wikipedia.org/wiki/Drum_memory

-- 
David Collantes
+1-407-484-7171

> On Apr 20, 2018, at 11:02, Tim Bradshaw <tfb at tfeb.org> wrote:
> 
> I am sure I remember a machine which had this (which would have been running a BSD 4.2 port).  Is my memory right, and what was it for (something related to swap?)?
> 
> It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false).
> 
> --tim


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
  2018-04-20 15:02 Tim Bradshaw
@ 2018-04-20 15:58 ` Clem Cole
  2018-04-20 16:00 ` David Collantes
  1 sibling, 0 replies; 105+ messages in thread
From: Clem Cole @ 2018-04-20 15:58 UTC (permalink / raw)



Yes - it's the name of the paging device for the system.   It allowed
interleaved paging across multiple disk drives, so in effect you got an
indirection driver across multiple drives.   It should be in section 4 of
the BSD manual.

On Fri, Apr 20, 2018 at 11:02 AM, Tim Bradshaw <tfb at tfeb.org> wrote:

> I am sure I remember a machine which had this (which would have been
> running a BSD 4.2 port).  Is my memory right, and what was it for
> (something related to swap?)?
>
> It is stupidly hard to search for (or, alternatively, there are just no
> hits and the memory is false).
>
> --tim


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [TUHS] /dev/drum
@ 2018-04-20 15:02 Tim Bradshaw
  2018-04-20 15:58 ` Clem Cole
  2018-04-20 16:00 ` David Collantes
  0 siblings, 2 replies; 105+ messages in thread
From: Tim Bradshaw @ 2018-04-20 15:02 UTC (permalink / raw)


I am sure I remember a machine which had this (which would have been running a BSD 4.2 port).  Is my memory right, and what was it for (something related to swap?)?

It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false).

--tim


^ permalink raw reply	[flat|nested] 105+ messages in thread

end of thread, other threads:[~2018-05-08 22:39 UTC | newest]

Thread overview: 105+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <mailman.1.1525744802.16322.tuhs@minnie.tuhs.org>
2018-05-08 22:39 ` [TUHS] /dev/drum Johnny Billquist
2018-05-07 15:36 Noel Chiappa
  -- strict thread matches above, loose matches on Subject: below --
2018-05-06 13:07 Noel Chiappa
2018-05-06 15:57 ` Johnny Billquist
2018-05-05 13:06 Noel Chiappa
2018-05-05 20:53 ` Johnny Billquist
2018-05-03 11:39 Noel Chiappa
2018-05-03 21:22 ` Johnny Billquist
2018-04-30 15:05 Noel Chiappa
2018-04-30 16:43 ` Paul Winalski
2018-04-30 21:41 ` Johnny Billquist
2018-05-03  2:54 ` Charles Anthony
2018-04-28 20:40 Noel Chiappa
2018-04-29 15:37 ` Johnny Billquist
2018-04-29 16:34   ` Steve Nickolas
2018-04-29 16:48     ` Warner Losh
2018-04-28  0:19 Noel Chiappa
2018-04-28 10:41 ` Johnny Billquist
2018-04-28 12:19   ` Rico Pajarola
2018-04-27 23:01 Noel Chiappa
2018-04-27 23:10 ` Johnny Billquist
2018-04-27 23:39   ` Warner Losh
     [not found] <mailman.1.1524708001.6296.tuhs@minnie.tuhs.org>
2018-04-27 22:41 ` Johnny Billquist
2018-04-26  1:53 Noel Chiappa
     [not found] <mailman.143.1524696952.3788.tuhs@minnie.tuhs.org>
2018-04-25 23:08 ` Johnny Billquist
2018-04-25 22:55 Noel Chiappa
     [not found] <mailman.139.1524690859.3788.tuhs@minnie.tuhs.org>
2018-04-25 22:54 ` Johnny Billquist
2018-04-25 22:46 Noel Chiappa
     [not found] <mailman.137.1524667148.3788.tuhs@minnie.tuhs.org>
2018-04-25 21:43 ` Johnny Billquist
2018-04-25 22:24 ` Johnny Billquist
2018-04-26  5:51   ` Tom Ivar Helbekkmo
2018-04-25 22:37 ` Johnny Billquist
2018-04-24  7:10 Rudi Blom
     [not found] <mailman.125.1524526228.3788.tuhs@minnie.tuhs.org>
2018-04-23 23:44 ` Johnny Billquist
2018-04-23 23:57   ` Steve Nickolas
2018-04-24  0:24   ` Ronald Natalie
2018-04-24  0:25   ` Warren Toomey
2018-04-24  0:31     ` Dave Horsfall
2018-04-24  1:02   ` Lyndon Nerenberg
2018-04-24  4:32   ` Grant Taylor
2018-04-24  4:49     ` Bakul Shah
2018-04-24  4:59       ` Warner Losh
2018-04-24  6:22         ` Bakul Shah
2018-04-24 14:57           ` Warner Losh
2018-04-24  6:46   ` Lars Brinkhoff
2018-04-23 22:01 Noel Chiappa
2018-04-23 22:09 ` Warner Losh
2018-04-23 18:41 Noel Chiappa
2018-04-23 19:09 ` Clem Cole
2018-04-23 23:01 ` Dave Horsfall
2018-04-23 23:49   ` Dave Horsfall
2018-04-24  0:26   ` Ronald Natalie
2018-04-22 18:06 Norman Wilson
2018-04-20 16:45 Noel Chiappa
2018-04-20 16:53 ` Charles Anthony
2018-04-20 17:16 ` William Pechter
2018-04-20 23:35   ` Dave Horsfall
2018-04-22 11:48     ` Steve Simon
2018-04-20 15:02 Tim Bradshaw
2018-04-20 15:58 ` Clem Cole
2018-04-20 16:00 ` David Collantes
2018-04-20 16:12   ` Dan Cross
2018-04-20 16:21     ` Clem Cole
2018-04-20 16:33     ` Warner Losh
2018-04-20 19:17       ` Ron Natalie
2018-04-20 20:23         ` Clem Cole
2018-04-20 22:10         ` Dave Horsfall
2018-04-22 17:01     ` Lars Brinkhoff
2018-04-22 17:37       ` Clem Cole
2018-04-22 19:14         ` Bakul Shah
2018-04-22 20:58         ` Mutiny
2018-04-22 22:37           ` Clem cole
2018-04-22 21:51         ` Dave Horsfall
2018-04-25  1:27           ` Dan Stromberg
2018-04-25 12:18             ` Ronald Natalie
2018-04-25 13:39               ` Tim Bradshaw
2018-04-25 14:02                 ` arnold
2018-04-25 14:59                   ` tfb
2018-04-25 14:33               ` Ian Zimmerman
2018-04-25 14:46                 ` Larry McVoy
2018-04-25 15:03                   ` ron minnich
2018-04-25 20:29               ` Paul Winalski
2018-04-25 20:45                 ` Larry McVoy
2018-04-25 21:14                   ` Lawrence Stewart
2018-04-25 21:30                     ` ron minnich
2018-04-25 23:01                 ` Bakul Shah
2018-04-23 16:42         ` Tim Bradshaw
2018-04-23 17:30           ` Ron Natalie
2018-04-23 17:51             ` Clem Cole
2018-04-23 18:30               ` Ron Natalie
2018-04-25 14:02                 ` Tom Ivar Helbekkmo
2018-04-25 14:38                   ` Clem Cole
2018-04-23 20:47               ` Grant Taylor
2018-04-23 21:06                 ` Clem Cole
2018-04-23 21:14                   ` Dan Mick
2018-04-23 21:27                     ` Clem Cole
2018-04-23 22:07                 ` Tim Bradshaw
2018-04-23 22:15                   ` Warner Losh
2018-04-23 23:30                     ` Grant Taylor
2018-04-24  9:37                       ` Michael Kjörling
2018-04-24  9:57                         ` Dave Horsfall
2018-04-24 13:01                           ` Nemo
2018-04-24 13:03                         ` Arthur Krewat
2018-04-23 23:45                   ` Arthur Krewat
2018-04-24  8:05                     ` tfb
