The Unix Heritage Society mailing list
* [TUHS] /dev/drum
@ 2018-04-27 23:01 Noel Chiappa
  2018-04-27 23:10 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-27 23:01 UTC (permalink / raw)


    > From: Johnny Billquist

    >> Well, the 1972 edition of the -11/45 processor handbook
                   ^^

    > It would be nice if you actually could point out where this is the
    > case. I just went through that 1973 PDP-11/45 handbook
                                       ^^

Yes, the '73 one (red/purple cover) had switched. It's only the '72 one
(red/white cover) that says 'segments'.

	   Noel


^ permalink raw reply	[flat|nested] 105+ messages in thread
[parent not found: <mailman.1.1525744802.16322.tuhs@minnie.tuhs.org>]
* [TUHS] /dev/drum
@ 2018-05-07 15:36 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-05-07 15:36 UTC (permalink / raw)


    > From: Johnny Billquist

    > My point being that ... pages are invisible to the process segments are
    > very visible. And here we talk from a hardware point of view.

So you're saying 'segmentation means instructions explicitly include segment
numbers, and the address space is a two-dimensional array', or 'segmentation
means pointers explicitly include segment numbers', or something like that?

I seem to recall machines where that wasn't so, but I don't have the time to
look for them. Maybe the IBM System/38? The two 'spaces' in the KA10/KI10,
although a degenerate case (fewer even than the PDP-11), are one example.

I'm more interested in the semantics that are provided, not bits in
instructions.

It's true that with a large address space, one can sort of simulate
segmentation. To me, machines which explicitly have segment numbers in
instructions/pointers are one end of a spectrum of 'segmented machines', but
that's not a strict requirement. I'm more concerned about how they are used,
what the system/user gets.

Similarly for paging - fixed sizes (or a small number of sizes) are part of
the definition, but I'm more interested in how it's used - for demand loading,
and to simplify main memory allocation, etc.


    >> the semantics available for - and _visible_ to - the user are
    >> constrained by the mechanisms of the underlying hardware.

    > That is not the same thing as being visible.

It doesn't meet the definition above ('segment numbers in
instructions/pointers'), no. But I don't accept that definition.


    > All of this is so similar to mmap() that we could in fact be having this
    > exact discussion based on mmap() instead .. I don't see you claiming
    > that every machine use a segmented model

mmap() (and similar file->address space mapping mechanisms, which a bunch of
OS's have supported - TENEX/TOP-20, ITS, etc) are interesting, but to me,
orthogonal - although it clearly needs support from memory management hardware.

And one can add 'sharing memory between two processes' here, too; very similar
_mechanisms_ to mmap(), but different goals. (Although I suppose two processes
could map the same area of a file, and that would give them shared-memory IPC.)

	   Noel


* [TUHS] /dev/drum
@ 2018-05-06 13:07 Noel Chiappa
  2018-05-06 15:57 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-05-06 13:07 UTC (permalink / raw)


    > From: Johnny Billquist

    >> "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
    >> words long [which can grow in increments of 32 words]"

    > But then it is not actually giving programs direct access and
    > manipulation of the hardware. It is a software construct and service
    > offered by the OS, and the OS might fiddly around with various hardware
    > to give this service.

I don't understand how this is significant: most time-sharing OS's don't give
users access to the memory management control hardware either.

    > So the hardware is totally invisible after all.

Not quite - the semantics available for - and _visible_ to - the user are
constrained by the mechanisms of the underlying hardware.

Consider a machine with a KT11-B - it could not support very small segments, or
adjust the segment size in such small quanta. On the
other side, the KT11-B could support starting a 'software segment' at any
512-byte boundary in the virtual address space, unlike the KT11-C which only
supports 8KB boundaries.

    Noel


* [TUHS] /dev/drum
@ 2018-05-05 13:06 Noel Chiappa
  2018-05-05 20:53 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-05-05 13:06 UTC (permalink / raw)


    > From: Johnny Billquist

    >> in MERT 'segments' (called that) were a basic system primitive, which
    >> users had access to.

    > the OS gives you some construct which can easily be mapped on to the
    > hardware.

Right. "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
words long ... Associated with each segment are an internal segment
identifier and an optional global name." So it's clear how that maps onto the
PDP-11 memory management hardware - and a MERT 'segment' might use more than
one 'chunk'.


    >> I understand your definitions, and like breaking things up into
    >> 'virtual addressing' (which I prefer as the term, see below),
    >> 'non-residence' or 'demand loaded', and 'paging' (breaking into
    >> smallish, equal-sized chunks), but the problem with using "virtual
    >> memory" as a term for the first is that to most people, that term
    >> already has a meaning - the combination of all three.

Actually, after some research, it turns out to be only the first two. But I
digress...

    > It's actually not my definition. Demand paging is a term that have been
    > used for this for the last 40 years, and is not something there is much
    > contention about.

I wasn't talking about "demand paging", but rather your use of the term
"virtual memory":

    >>> Virtual memory is just *virtual* memory. It's not "real" or physical
    >>> in the sense that it has a dedicated location in physical memory
    >>> ... Instead, each process has its own memory, which might be mapped
    >>> somewhere in physical memory, but it might also not be.  And one
    >>> processes address 0 is not the same as another processes address
    >>> 0. They both have the illusion that they have the full memory address
    >>> range to them selves, unaware of the fact that there are many
    >>> processes who also have that same illusion.

I _like_ having an explicit term for the _concept_ you're describing there; I
just had a problem with the use of the _term_ "virtual memory" for it - since
that term already has a different meaning to many people.

Try Googling "virtual memory" and you turn up things like this: "compensate
for physical memory shortages by temporarily transferring data from RAM to
disk". Which is why I proposed calling it "virtual addressing" instead.

    > I must admit that I'm rather surprised if the term really is unknown to
    > you.

No, of course I am familiar with "demand paging".


Anyway, this conversation has been very helpful in clarifying my thinking
about virtual memory/paging. I have updated the CHWiki article based on it:

  http://gunkies.org/wiki/Virtual_memory

including the breakdown into three separate (but related) concepts: i) virtual
addressing, ii) demand loading, and iii) paging. I'd be interested in any
comments people have.


    > Which also begs the question - was there also a RK11-A?

One assumes there must have been RK11-A's and -B's, otherwise they wouldn't
have gotten to RK11-C... :-)

I have no idea if both existed in physical form (one might have been just a
design exercise). However, the photo of the non-RK11-C indicator panel
confirms that at least one of them was actually implemented.


    > And the "chunks" on a PDP-11, running Unix, RSX or RSTS/E, or something
    > similar is also totally invisible.

Right, but not under MERT - although clearly a single 'software' segment
might use more than one set of physical 'chunks'.

Actually, Unix is _somewhat_ similar, in that processes always have separate
stack and text/data 'areas' (they don't call them 'segments', as far as I
could see) - and separate text and data 'areas' too, when pure code is in
use; and any area might use more than one 'chunk'.

The difference is that Unix doesn't support 'segments' as an OS primitive, the
way MERT does.

       Noel


* [TUHS] /dev/drum
@ 2018-05-03 11:39 Noel Chiappa
  2018-05-03 21:22 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-05-03 11:39 UTC (permalink / raw)


    > That would be a pretty ugly way to look at the world.

'Beauty is in the eye of the beholder', and all that! :-)

    > Not to mention that one segment silently slides over into the next one
    > if it's more than 8K.

Again, precedent; IIRC, on the GE-645 Multics, segments were limited to 2^N-1 pages,
precisely because otherwise incrementing an inter-segment pointer could march off
the end of one, and into the next! (The -645 was implemented as a 'bag on the side'
of the non-segmented -635, so things like this were somewhat inevitable.)


    > wouldn't you say that the "chunks" on a PDP-11 are invisible to the
    > user? Unless you are the kernel of course. Or run without protection.

No, in MERT 'segments' (called that) were a basic system primitive, which
users had access to. (Very cool system! I really need to get moving on trying
to recover that...)


    > *Demand* paging is definitely a separate concept from virtual memory.

Hmmm. I understand your definitions, and like breaking things up into 'virtual
addressing' (which I prefer as the term, see below), 'non-residence' or
'demand loaded', and 'paging' (breaking into smallish, equal-sized chunks),
but the problem with using "virtual memory" as a term for the first is that to
most people, that term already has a meaning - the combination of all three.

(I have painful memories of this sort of thing - the term 'locator' was
invented after we gave up trying to convince people one could have a network
architecture in which not all packets contained addresses. That caused a major
'does not compute' fault in most people's brains! And 'locator' has since been
perverted from its original definition. But I digress.)


    > There is no real connection between virtual memory and memory
    > protection. One can exist with or without the other.

Virtual addressing and memory protection; yes, no connection. (Although the
former will often give you the latter - if process A can't see, or name, 
process B's memory, it can't damage it.)


    > Might have been just some internal early attempt that never got out of DEC?

Could be; something similar seems to have happened to the 'RK11-B':

  http://gunkies.org/wiki/RK11_disk_controller


    >> I don't have any problem with several different page sizes, _if it
    >> makes engineering sense to support them_.

    > So, would you then say that such machines do not have pages, but have 
    > segments?
    > Or where do you draw the line? Is it some function of how many different 
    > sized pages there can be before you would call it segments? ;-)

No, the number doesn't make a difference (to me). I'm trying to work out what
the key difference is; in part, it's that segments are first-class objects
which are visible to the user; paging is almost always hidden under the
sheets.

But not always; some OS's allow processes to share pages, or to map file pages
into address spaces, etc. Which does make it complex to separate the two..

     Noel


* [TUHS] /dev/drum
@ 2018-04-30 15:05 Noel Chiappa
  2018-04-30 16:43 ` Paul Winalski
                   ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-30 15:05 UTC (permalink / raw)


    > From: Johnny Billquist

    > This is where I disagree. The problem is that the chunks in the PDP-11
    > do not describe things from a zero offset, while a segment do. Only
    > chunk 0 is describing addresses from a 0 offset. And exactly which chunk
    > is selected is based on the virtual address, and nothing else.

Well, you have something of a point, but... it depends on how you look at it.
If you think of a PDP-11 address as holding two concatenated fields (3 bits of
'segment' and 13 bits of 'offset within segment'), not so much.

IIRC there are other segmented machines that do things this way - I can't
recall the details of any off the top of my head. (Well, there the KA10/KI10,
with their writeable/write-protected 'chunks', but that's a bit of a
degenerate case. I'm sure there is some segmented machine that works that way,
but I can't recall it.)

BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user (except
for setting the total size of the process), whereas segmentation is explicitly
visible to the user.

I think there is at least one PDP-11 OS which makes the 'chunks' visible to
the user - MERT (and speaking of which, I need to get back on my project of
trying to track down source/documentation for it).


    > Demand paging really is a separate thing from virtual memory. It's a
    > very bad thing to try and conflate the two.

Really? I always worked on the basis that the two terms were synonyms - but
I'm very open to the possibility that there is a use to having them have
distinct meanings.

I see a definition of 'virtual memory' below, but what would you use for
'paging'?

Now that I think about it, there are actually _three_ concepts: 'virtual
memory', as you define it; what I will call 'non-residence' - i.e. a process
can run without _all_ of its virtual memory being present in physical memory;
and 'paging' - which I would define as 'use fixed-size blocks'. (The third is
more of an engineering thing, rather than high-level architecture, since it
means you never have to 'shuffle' core, as systems that used variable-sized
allocations had to.)

'Non-residence' is actually orthogonal to 'paging'; I can imagine a paging
system which didn't support non-residence, and vice versa (either swapping
the entire virtual address space, or doing it a segment at a time if the
system has segments).

    > There is nothing about virtual memory that says that you do not have to
    > have all of your virtual memory mapped to physical memory when the
    > process is running.

True.

    > Virtual memory is just *virtual* memory. It's not "real" or physical in
    > the sense that it has a dedicated location in physical memory, which
    > would be the same for all processes talking about that memory
    > address. Instead, each process has its own memory, which might be mapped
    > somewhere in physical memory, but it might also not be.

OK so far.

    > each process would have to be aware of all the other processes that use
    > memory, and make sure that no two processes try to use the same memory,
    > or chaos ensues.

There's also the System 360 approach, where processes share a single address
space (physical memory - no virtual memory on them!), but it uses protection
keys on memory 'chunks' (not sure of the correct IBM term) to ensure that one
process can't tromp on another's memory.


    >> a memory management device for the PDP-11 which provided 'real' paging,
    >> the KT11-B?

    > have never read any technical details. Interesting read.

Yes, we were lucky to be able to retrieve detailed info on it! A PDP-11/20
sold on eBay with a fairly complete set of KT11-B documentation, and allegedly
a "KT11-B" as well, but alas, it turned out to 'only' be an RK11-C. Not that
RK11-C's aren't cool, but on the 'cool scale' they are like 3, whereas a
KT11-B would have been, like, 17! :-) Still, we managed to get the KT11-B
'manual' (such as it is) and prints online.

I'd love to find out equivalent detail for the KT11-A, but I've never seen
anything on it. (And I've appealed before for the KS11, which an early PDP-11
Unix apparently used, but no joy.)


    > But how do you then view modern architectures which have different sized
    > pages? Are they no longer pages then?

Actually, there is precedent for that. The original Multics hardware, the
GE-645, supported two page sizes. That was dropped in later machines (the
Honeywell 6000's) since it was decided that the extra complexity wasn't worth
it.

I don't have any problem with several different page sizes, _if it makes
engineering sense to support them_. (I assume that the rationale for their
re-introduction is that in the age of 64-bit machines, page tables for very
large 'chunks' can be very large if pages of ~1K or so are used, or something
like that.)

It does make real memory allocation (one of the advantages of paging) more
difficult, since there would now be small and large page frames. Although I
suppose it wouldn't be hard to coalesce them, if there are only two sizes, and
one's a small power-of-2 multiple of the other - like 'fragments' in the
Berkeley Fast File System for BSD4.2.

I have a query, though - how does a system with two page sizes know which to
use? On Multics (and probably on the x86), it's a per-segment attribute. But
on a system with a large, flat address space, how does the system know which
parts of it are using small pages, and which large?

      Noel


* [TUHS] /dev/drum
@ 2018-04-28 20:40 Noel Chiappa
  2018-04-29 15:37 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-28 20:40 UTC (permalink / raw)


    > From: Johnny Billquist

    > Gah. If I were to try and collect every copy made, it would be quite a
    > collection.

Well, just the 'processor handbook's (the little paperback things), I have
about 30. (If you add devices, that probably doubles it.) I think my
collection is complete.


    > So there was a total change in terminology early in the 11/45 life, it
    > would appear. I wonder why. ... I probably would not blame some market
    > droids.

I was joking, but also serious. I really do think it was most likely
marketing-driven. (See below for why I don't think it was engineering-driven,
which leaves....)

I wonder if there's anything in the DEC archives (a big chunk of which are now
at the CHM) which would shed any light? Some of the archives are online there,
e.g.:

  http://www.bitsavers.org/pdf/dec/pdp11/memos/

but it seems to be mostly engineering (although there's some that would be
characterized as marketing).


    > one of the most important differences between segmentation and pages are
    > that with segmentation you only have one contiguous range of memory,
    > described by a base and a length register. This will be a contiguous
    > range of memory both in virtual memory, and in physical memory.

I agree completely (although I extend it to multiple segments, each of which
has the characteristics you describe).

Which is why I think the original DEC nomenclature for the PDP-11's memory
management was more appropriate - the description above is _exactly_ the
functionality provided for each of the 8 'chunks' (to temporarily use a
neutral term) of PDP-11 address space, which don't quack like most other
'pages' (to use the 'if it quacks like a duck' standard).


One query I have comes from the usual goal of 'virtual memory' (which is the
concept most tightly associated with 'pages'), which is to allow a process to
run without all of its pages in physical memory.

I don't know much about PDP-11 DEC OS's, but do any of them do this? (I.e.
allow partial residency.)  If not, that would be ironic (in view of the later
name) - and, I think, evidence that the PDP-11 'chunks' aren't really pages.


BTW, did you know that prior to the -11/45, there was a memory management
device for the PDP-11 which provided 'real' paging, the KT11-B? More here:

  http://gunkies.org/wiki/KT11-B_Paging_Option

I seem to recall some memos in the memo archive that discussed it; I _think_
it mentioned why they decided not to go that way in doing memory management
for the /45, but I forget the details. (Maybe the performance hit of keeping
the page tables in main memory was significant?)


    > With segmentation you cannot have your virtual memory split up and
    > spread out over physical memory.

Err, Multics did that; the process' address space was split up into many
segments (a true 2-dimensional naming system, with 18 bits of segment number),
which were then split up into pages, for both virtual memory ('not all
resident'), and for physical memory allocation.

Although I suppose one could view that as two separate, sequential steps -
i.e. i) the division into segments, and ii) the division of segments into
pages. In fact, I take this approach in describing the Multics memory system,
since it's easier to understand as two independent things.

    > You can also have "holes" in your memory, with pages that are invalid,
    > yet have pages higher up in your memory .. Something that is impossible
    > with segmentation, since you only have one set of registers for each
    > memory type (at most) in a segmented memory implementation.

You seem to be thinking of segmentation a la Intel 8086, which is a hack they
added to allow use of more memory (although I suspect that PDP-11 chunks were
a hack of a similar flavour).

At the time we are speaking of, the Intel 8086 did not exist (it came along
quite a few years later). The systems which supported segmentation, such as
Multics, the Burroughs 5000 and successors, etc had 'real' segmentation, with
a full two-dimensional naming system for memory. (Burroughs 5000 segment
numbers were 10 bits wide.)

    > I mean, when people talk about segmented memory, what most everyone
    > today thinks of is the x86 model, where all of this certainly is true.

It's also (IMNSHO) irrelevant to this. Intel's brain-damage is not the
entirety of computer science (or shouldn't be).

(BTW, later Intel xx86 machines did allow you to have 'holes' in segments, via
the per-segment page tables.)


    > it would be very wrong to call what the PDP-11 have segmentation

The problem is that what PDP-11 memory management does isn't really like
_either_ segmentation, or paging, as practised in other machines. With only 8
chunks, it's not like Multics etc, which have very large address spaces split
up into many segments. (And maybe _that_'s why the name was changed - when
people heard 'segments' they thought 'lots of them'.)

However, it's not like paging on effectively all other systems with paging,
because in them paging's used to provide virtual memory (in the sense of 'the
process runs with pages missing from real memory'), and to make memory
allocation simple by use of fixed-size page frames.

So any name given PDP-11 'chunks' is going to have _some_ problems. I just
think 'segmentation' (as you defined it at the top) is a better fit than the
alternative...

   Noel


* [TUHS] /dev/drum
@ 2018-04-28  0:19 Noel Chiappa
  2018-04-28 10:41 ` Johnny Billquist
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-28  0:19 UTC (permalink / raw)


    > From: Johnny Billquist

    > For 1972 I only found the 11/40 handbook.

I have a spare copy of the '72 /45 handbook; send me your address, and I'll
send it along. (Every PDP-11 fan should have a copy of every edition of every
model's handbooks... :-)

In the meantime, I'm too lazy to scan the whole thing, but here's the first
page of Chapter 6 from the '72:

  http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/jpg/tmp/PDP11145ProcHbook72pg6-1.jpg


    > went though the 1972 Maintenance Reference Manual for the 11/45. That
    > one also says "page". :-)

There are a few remnant relics of the 'segment' phase, e.g. here:

  http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s

which has this comment:

  / turn on segmentation

Also, if you look at the end, you'll see SSR0, SSR1 etc (as per the '72
handbook), instead of the later SR0, SR1, etc.

	Noel



[parent not found: <mailman.1.1524708001.6296.tuhs@minnie.tuhs.org>]
* [TUHS] /dev/drum
@ 2018-04-26  1:53 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-26  1:53 UTC (permalink / raw)


    > From: Johnny Billquist

    > if you hadn't had the ability for them to be less than 8K, you wouldn't
    > even try that argument.

Well, the 1972 edition of the -11/45 processor handbook called them segments... :-)

I figure some marketing droid found out that 'paging' was the new buzzword, and
changed the name... :-) :-)

	Noel


[parent not found: <mailman.143.1524696952.3788.tuhs@minnie.tuhs.org>]
* [TUHS] /dev/drum
@ 2018-04-25 22:55 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-25 22:55 UTC (permalink / raw)


    > From: Johnny Billquist

    > PDP-11 have 8K pages.

Segments. :-) (This is an old argument between Johnny and me, I'm not trying
to re-open it, just yanking his chain... :-)


    > On a PDP-11, all your virtual memory was always there when the process
    > was on the CPU

In theory, at least (I don't know of an OS that made use of this), didn't the
memory management hardware allow the possibility of demand paging? I note
that Access Control Field value 0 is "non-resident".

Unix kinda-sorta used this stuff, to automatically extend the stack when the
user ran off the end of it (causing a trap).

    > you normally did not have demand paging, since that was not really
    > gaining you much on a PDP-11

Especially on the later machines, with more than 256KB of hardware main
memory. Maybe it might have been useful on the earlier ones (e.g. the -11/45).

	Noel


[parent not found: <mailman.139.1524690859.3788.tuhs@minnie.tuhs.org>]
* [TUHS] /dev/drum
@ 2018-04-25 22:46 Noel Chiappa
  0 siblings, 0 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-25 22:46 UTC (permalink / raw)


    > From: Johnny Billquist

    > I don't know exactly why DEC left the last three tracks unused. Might
    > have been for diagnostic tools to have a scratch area to play with.
    > Might have been that those tracks were found to be less reliable. Or
    > maybe something completely different. But it was not for bad block
    > replacement, as DEC didn't even do that on RK05

The "pdp11 peripherals handbook" (1975 edition at least, I can't be bothered
to check them all) says, for the RK11:

  "Tracks/surface: 200+3 spare"

and for the RP11:

  "Tracks/surface: 400 (plus 6 spares)"

which sounds like it could be for bad block replacement, but the RP11-C
Maintenance Manual says (pg. 3-10) "the inner-most cylinders 400-405 are only
used for maintenance".

Unix blithely ignored all that, and used every block available on both the
RK11 and RP11.

     Noel


[parent not found: <mailman.137.1524667148.3788.tuhs@minnie.tuhs.org>]
* [TUHS] /dev/drum
@ 2018-04-24  7:10 Rudi Blom
  0 siblings, 0 replies; 105+ messages in thread
From: Rudi Blom @ 2018-04-24  7:10 UTC (permalink / raw)



>Date: Mon, 23 Apr 2018 13:51:07 -0400
>From: Clem Cole <clemc at ccc.com>
>To: Ron Natalie <ron at ronnatalie.com>
>Cc: Tim Bradshaw <tfb at tfeb.org>, TUHS main list <tuhs at minnie.tuhs.org>
>Subject: Re: [TUHS] /dev/drum
>Message-ID:
>        <CAC20D2PEzAayjfaQN+->kQS=H7npcEZ_OKXL1ffPxak5b2ENv4Q at mail.gmail.com>
>Content-Type: text/plain; charset="utf-8"

... some stuff removed ...

>​Exactly...   For instance an RK04 was less that 5K blocks (4620 or some
>such - I've forgotten the actually amount).  The disk was mkfs'ed to the
>first 4K and the left over was give to the swap system.   By the time of
>4.X, the RP06 was 'partitioned' into 'rings' (some overlapping).  The 'a'.
>partition was root, the 'b' was swap and one fo the others was the rest.
>Later the 'c' was a short form for copying the entire disk.

Wondered why, but I guess now I know that's the reason Digital UNIX on
alpha used the same disk layout. From a AlphaServer DS10 running
DU4.0g, output "disklabel -r rz16a"

# /dev/rrz16a:
type: SCSI
disk: BB009222
label:
flags:
bytes/sector: 512
sectors/track: 168
tracks/cylinder: 20
sectors/cylinder: 3360
cylinders: 5273
sectors/unit: 17773524
rpm: 7200
interleave: 1
trackskew: 66
cylinderskew: 83
headswitch: 0		# milliseconds
track-to-track seek: 0	# milliseconds
drivedata: 0

8 partitions:
#          size     offset    fstype   [fsize bsize   cpg]   # NOTE: values not exact
  a:     524288          0     AdvFS                    	# (Cyl.    0 - 156*)
  b:    1572864     524288      swap                    	# (Cyl.  156*- 624*)
  c:   17773524          0    unused        0     0       	# (Cyl.    0 - 5289*)
  d:          0          0    unused        0     0       	# (Cyl.    0 - -1)
  e:          0          0    unused        0     0       	# (Cyl.    0 - -1)
  f:          0          0    unused        0     0       	# (Cyl.    0 - -1)
  g:    4194304    2097152     AdvFS                    	# (Cyl.  624*- 1872*)
  h:   11482068    6291456     AdvFS                    	# (Cyl. 1872*- 5289*)


[parent not found: <mailman.125.1524526228.3788.tuhs@minnie.tuhs.org>]
* [TUHS] /dev/drum
@ 2018-04-23 22:01 Noel Chiappa
  2018-04-23 22:09 ` Warner Losh
  0 siblings, 1 reply; 105+ messages in thread
From: Noel Chiappa @ 2018-04-23 22:01 UTC (permalink / raw)


    > From: Clem Cole

    > To be honest, I really don't remember - but I know we used letters for
    > the different partitions on the 11/70 before BSD showed up.

In V6 (and probably before that, too), it was numbers:

  http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man4/rp.4

So on my machine which had 2 x 50MB CalChomps, with a Diva controller, which
we had to split up into two partition each (below), they were dv00, dv01, dv10
and dv11. Letters for the partitions made it easier...

    > The reason for the partition originally was (and it must have been 6th
    > edition when I first saw it), DEC finally made a disk large enough that
    > number of blocks overflowed a 16 bit integer.  So splitting the disk
    > into smaller partitions allowed the original seek(2) to work without
    > overflow.

No, in V6 filesystems, block numbers (in inodes, etc - also the file system
size in the superblock) were only 16 bits, so a 50MB disk (100K blocks) had to
be split up into partitions to use it all. True of the RP03/04 in V6 too (see
the man page above).

	Noel


* [TUHS] /dev/drum
@ 2018-04-23 18:41 Noel Chiappa
  2018-04-23 19:09 ` Clem Cole
  2018-04-23 23:01 ` Dave Horsfall
  0 siblings, 2 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-23 18:41 UTC (permalink / raw)


    > From: "Ron Natalie"

    > I'm pretty sure that swapping in V6 took place to a major/minor number
    > configured at kernel build time.

Yup, in c.c, along with the block/character device switches (which converted
major device numbers to routines).

    > You could create a dev node for the swap device, but it wasn't used for
    > the actual swapping.

Yes.

    > We actually dedicated a full 1024 block RF11 fixed head to the system in
    > the early days

Speaking of fixed-head disks, one of the Bell systems used (IIRC) an RS04
fixed-head disk for the root. DEC apparently only used that disk for swapping
in their OS's, so the DEC diagnostics felt free to scribble on it. Then Field
Circus comes in to work on the machine... Ooops!

    Noel



* [TUHS] /dev/drum
@ 2018-04-22 18:06 Norman Wilson
  0 siblings, 0 replies; 105+ messages in thread
From: Norman Wilson @ 2018-04-22 18:06 UTC (permalink / raw)


Clem Cole:

  On the other hand, we still 'dump core' and use the core files for
  debugging.  So, while the term 'drum' lost its meaning, 'core file' - though
  it might be considered 'quaint' by today's hacker - still has meaning.

====

Just as we still speak of dialling and hanging up the phone,
electing Presidents, and other actions long since made obsolete
by changes of technology and culture.

Norman Wilson
Toronto ON


* [TUHS] /dev/drum
@ 2018-04-20 16:45 Noel Chiappa
  2018-04-20 16:53 ` Charles Anthony
  2018-04-20 17:16 ` William Pechter
  0 siblings, 2 replies; 105+ messages in thread
From: Noel Chiappa @ 2018-04-20 16:45 UTC (permalink / raw)


    > From: Warner Losh

    > Drum memory stopped being a new thing in the early 70's.

Mid 60's. Fixed-head disks replaced them - same basic concept, same number of
bits, less physical volume. Those lasted until the late 70's - early PDP-11
Unixes have drivers for the RF11 and RS0x fixed-head disks.

The 'fire-hose' drum on the GE 645 Multics was the last one I've heard
of. Amusing story about it here:

  http://www.multicians.org/low-bottle-pressure.html

Although, reading it, the device may not actually have been (physically) a drum.

    > There never was a drum device, at least a commercial, non-lab
    > experiment, for the VAXen. They all swapped to spinning disks by then.

s/spinning/non-fixed-head/.

	Noel


* [TUHS] /dev/drum
@ 2018-04-20 15:02 Tim Bradshaw
  2018-04-20 15:58 ` Clem Cole
  2018-04-20 16:00 ` David Collantes
  0 siblings, 2 replies; 105+ messages in thread
From: Tim Bradshaw @ 2018-04-20 15:02 UTC (permalink / raw)


I am sure I remember a machine which had this (it would have been running a BSD 4.2 port).  Is my memory right, and what was it for - something related to swap?

It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false).

--tim



end of thread, other threads:[~2018-05-08 22:39 UTC | newest]

Thread overview: 105+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-04-27 23:01 [TUHS] /dev/drum Noel Chiappa
2018-04-27 23:10 ` Johnny Billquist
2018-04-27 23:39   ` Warner Losh
     [not found] <mailman.1.1525744802.16322.tuhs@minnie.tuhs.org>
2018-05-08 22:39 ` Johnny Billquist
  -- strict thread matches above, loose matches on Subject: below --
2018-05-07 15:36 Noel Chiappa
2018-05-06 13:07 Noel Chiappa
2018-05-06 15:57 ` Johnny Billquist
2018-05-05 13:06 Noel Chiappa
2018-05-05 20:53 ` Johnny Billquist
2018-05-03 11:39 Noel Chiappa
2018-05-03 21:22 ` Johnny Billquist
2018-04-30 15:05 Noel Chiappa
2018-04-30 16:43 ` Paul Winalski
2018-04-30 21:41 ` Johnny Billquist
2018-05-03  2:54 ` Charles Anthony
2018-04-28 20:40 Noel Chiappa
2018-04-29 15:37 ` Johnny Billquist
2018-04-29 16:34   ` Steve Nickolas
2018-04-29 16:48     ` Warner Losh
2018-04-28  0:19 Noel Chiappa
2018-04-28 10:41 ` Johnny Billquist
2018-04-28 12:19   ` Rico Pajarola
     [not found] <mailman.1.1524708001.6296.tuhs@minnie.tuhs.org>
2018-04-27 22:41 ` Johnny Billquist
2018-04-26  1:53 Noel Chiappa
     [not found] <mailman.143.1524696952.3788.tuhs@minnie.tuhs.org>
2018-04-25 23:08 ` Johnny Billquist
2018-04-25 22:55 Noel Chiappa
     [not found] <mailman.139.1524690859.3788.tuhs@minnie.tuhs.org>
2018-04-25 22:54 ` Johnny Billquist
2018-04-25 22:46 Noel Chiappa
     [not found] <mailman.137.1524667148.3788.tuhs@minnie.tuhs.org>
2018-04-25 21:43 ` Johnny Billquist
2018-04-25 22:24 ` Johnny Billquist
2018-04-26  5:51   ` Tom Ivar Helbekkmo
2018-04-25 22:37 ` Johnny Billquist
2018-04-24  7:10 Rudi Blom
     [not found] <mailman.125.1524526228.3788.tuhs@minnie.tuhs.org>
2018-04-23 23:44 ` Johnny Billquist
2018-04-23 23:57   ` Steve Nickolas
2018-04-24  0:24   ` Ronald Natalie
2018-04-24  0:25   ` Warren Toomey
2018-04-24  0:31     ` Dave Horsfall
2018-04-24  1:02   ` Lyndon Nerenberg
2018-04-24  4:32   ` Grant Taylor
2018-04-24  4:49     ` Bakul Shah
2018-04-24  4:59       ` Warner Losh
2018-04-24  6:22         ` Bakul Shah
2018-04-24 14:57           ` Warner Losh
2018-04-24  6:46   ` Lars Brinkhoff
2018-04-23 22:01 Noel Chiappa
2018-04-23 22:09 ` Warner Losh
2018-04-23 18:41 Noel Chiappa
2018-04-23 19:09 ` Clem Cole
2018-04-23 23:01 ` Dave Horsfall
2018-04-23 23:49   ` Dave Horsfall
2018-04-24  0:26   ` Ronald Natalie
2018-04-22 18:06 Norman Wilson
2018-04-20 16:45 Noel Chiappa
2018-04-20 16:53 ` Charles Anthony
2018-04-20 17:16 ` William Pechter
2018-04-20 23:35   ` Dave Horsfall
2018-04-22 11:48     ` Steve Simon
2018-04-20 15:02 Tim Bradshaw
2018-04-20 15:58 ` Clem Cole
2018-04-20 16:00 ` David Collantes
2018-04-20 16:12   ` Dan Cross
2018-04-20 16:21     ` Clem Cole
2018-04-20 16:33     ` Warner Losh
2018-04-20 19:17       ` Ron Natalie
2018-04-20 20:23         ` Clem Cole
2018-04-20 22:10         ` Dave Horsfall
2018-04-22 17:01     ` Lars Brinkhoff
2018-04-22 17:37       ` Clem Cole
2018-04-22 19:14         ` Bakul Shah
2018-04-22 20:58         ` Mutiny
2018-04-22 22:37           ` Clem cole
2018-04-22 21:51         ` Dave Horsfall
2018-04-25  1:27           ` Dan Stromberg
2018-04-25 12:18             ` Ronald Natalie
2018-04-25 13:39               ` Tim Bradshaw
2018-04-25 14:02                 ` arnold
2018-04-25 14:59                   ` tfb
2018-04-25 14:33               ` Ian Zimmerman
2018-04-25 14:46                 ` Larry McVoy
2018-04-25 15:03                   ` ron minnich
2018-04-25 20:29               ` Paul Winalski
2018-04-25 20:45                 ` Larry McVoy
2018-04-25 21:14                   ` Lawrence Stewart
2018-04-25 21:30                     ` ron minnich
2018-04-25 23:01                 ` Bakul Shah
2018-04-23 16:42         ` Tim Bradshaw
2018-04-23 17:30           ` Ron Natalie
2018-04-23 17:51             ` Clem Cole
2018-04-23 18:30               ` Ron Natalie
2018-04-25 14:02                 ` Tom Ivar Helbekkmo
2018-04-25 14:38                   ` Clem Cole
2018-04-23 20:47               ` Grant Taylor
2018-04-23 21:06                 ` Clem Cole
2018-04-23 21:14                   ` Dan Mick
2018-04-23 21:27                     ` Clem Cole
2018-04-23 22:07                 ` Tim Bradshaw
2018-04-23 22:15                   ` Warner Losh
2018-04-23 23:30                     ` Grant Taylor
2018-04-24  9:37                       ` Michael Kjörling
2018-04-24  9:57                         ` Dave Horsfall
2018-04-24 13:01                           ` Nemo
2018-04-24 13:03                         ` Arthur Krewat
2018-04-23 23:45                   ` Arthur Krewat
2018-04-24  8:05                     ` tfb
