* [TUHS] /dev/drum @ 2018-04-20 15:02 Tim Bradshaw 2018-04-20 15:58 ` Clem Cole 2018-04-20 16:00 ` David Collantes 0 siblings, 2 replies; 59+ messages in thread From: Tim Bradshaw @ 2018-04-20 15:02 UTC (permalink / raw) I am sure I remember a machine which had this (which would have been running a BSD 4.2 port). Is my memory right, and what was it for (something related to swap?)? It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false). --tim ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-20 15:02 [TUHS] /dev/drum Tim Bradshaw @ 2018-04-20 15:58 ` Clem Cole 2018-04-20 16:00 ` David Collantes 1 sibling, 0 replies; 59+ messages in thread
From: Clem Cole @ 2018-04-20 15:58 UTC (permalink / raw)

Yes - it's the name of the paging device for the system. It allowed interleaved paging across multiple disk drives: an indirection driver presented several drives as a single paging device. It should be in section 4 of the BSD manual.

On Fri, Apr 20, 2018 at 11:02 AM, Tim Bradshaw <tfb at tfeb.org> wrote:
> I am sure I remember a machine which had this (which would have been
> running a BSD 4.2 port). Is my memory right, and what was it for
> (something related to swap?)?
>
> It is stupidly hard to search for (or, alternatively, there are just no
> hits and the memory is false).
>
> --tim
* [TUHS] /dev/drum 2018-04-20 15:02 [TUHS] /dev/drum Tim Bradshaw 2018-04-20 15:58 ` Clem Cole @ 2018-04-20 16:00 ` David Collantes 2018-04-20 16:12 ` Dan Cross 1 sibling, 1 reply; 59+ messages in thread
From: David Collantes @ 2018-04-20 16:00 UTC (permalink / raw)

I found a Wikipedia[0] entry for it.

[0] https://en.m.wikipedia.org/wiki/Drum_memory

--
David Collantes
+1-407-484-7171

> On Apr 20, 2018, at 11:02, Tim Bradshaw <tfb at tfeb.org> wrote:
>
> I am sure I remember a machine which had this (which would have been running a BSD 4.2 port). Is my memory right, and what was it for (something related to swap?)?
>
> It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false).
>
> --tim
* [TUHS] /dev/drum 2018-04-20 16:00 ` David Collantes @ 2018-04-20 16:12 ` Dan Cross 2018-04-20 16:21 ` Clem Cole ` (2 more replies) 0 siblings, 3 replies; 59+ messages in thread
From: Dan Cross @ 2018-04-20 16:12 UTC (permalink / raw)

That's a bit different. It's possible that some early Unix machines had actual drum devices for storage or swap (did any of them?), but the /dev/drum device is what Clem says it was.

It's funny, I just happened across this a couple of days ago when I went looking for the `hier.7` man page from 4.4BSD-Lite2:

https://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&sektion=7&manpath=4.4BSD+Lite2&arch=default&format=html

It refers to this: https://www.freebsd.org/cgi/man.cgi?query=drum&sektion=4&apropos=0&manpath=4.4BSD+Lite2

The claim is that it came from 3.0BSD. Why was it called drum? I imagine that's historical license coupled with grad student imagination, but I'm curious whether it has its origin in actual hardware used at UC Berkeley. Clem, that was roughly your era, was it not?

- Dan C.

On Fri, Apr 20, 2018 at 12:00 PM, David Collantes <david at collantes.us> wrote:
> I found a Wikipedia[0] entry for it.
>
> [0] https://en.m.wikipedia.org/wiki/Drum_memory
>
> --
> David Collantes
> +1-407-484-7171
>
> On Apr 20, 2018, at 11:02, Tim Bradshaw <tfb at tfeb.org> wrote:
>
> I am sure I remember a machine which had this (which would have been running a BSD 4.2 port). Is my memory right, and what was it for (something related to swap?)?
>
> It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false).
>
> --tim
* [TUHS] /dev/drum 2018-04-20 16:12 ` Dan Cross @ 2018-04-20 16:21 ` Clem Cole 2018-04-20 16:33 ` Warner Losh 2018-04-22 17:01 ` Lars Brinkhoff 2 siblings, 0 replies; 59+ messages in thread
From: Clem Cole @ 2018-04-20 16:21 UTC (permalink / raw)

On Fri, Apr 20, 2018 at 12:12 PM, Dan Cross <crossd at gmail.com> wrote:
> That's a bit different. It's possible that some early Unix machines had
> actual drum devices for storage or swap (did any of them?), but the
> /dev/drum device is what Clem says it was.
>
> It's funny, I just happened across this a couple of days ago when I went
> looking for the `hier.7` man page from 4.4BSD-Lite2:
>
> https://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&sektion=7&manpath=4.4BSD+Lite2&arch=default&format=html
>
> It refers to this: https://www.freebsd.org/cgi/man.cgi?query=drum&sektion=4&apropos=0&manpath=4.4BSD+Lite2
>
> The claim is that it came from 3.0BSD.

Yes - I believe that is true; I just looked at a 4.1 manual and it's definitely there.

> Why was it called drum?

wnj was being 'cute' -- drums were historically the device large systems paged to, so people understood the reference at the time.

> I imagine that's historical license coupled with grad student imagination,
> but I'm curious if it has origin in actual hardware used at UC Berkeley.
> Clem, that was roughly your era, was it not?

Yes - very much my era,
Clem
* [TUHS] /dev/drum 2018-04-20 16:12 ` Dan Cross 2018-04-20 16:21 ` Clem Cole @ 2018-04-20 16:33 ` Warner Losh 2018-04-20 19:17 ` Ron Natalie 2018-04-22 17:01 ` Lars Brinkhoff 2 siblings, 1 reply; 59+ messages in thread
From: Warner Losh @ 2018-04-20 16:33 UTC (permalink / raw)

On Fri, Apr 20, 2018 at 10:12 AM, Dan Cross <crossd at gmail.com> wrote:
> That's a bit different. It's possible that some early Unix machines had
> actual drum devices for storage or swap (did any of them?), but the
> /dev/drum device is what Clem says it was.
>
> It's funny, I just happened across this a couple of days ago when I went
> looking for the `hier.7` man page from 4.4BSD-Lite2:
>
> https://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&sektion=7&manpath=4.4BSD+Lite2&arch=default&format=html
>
> It refers to this: https://www.freebsd.org/cgi/man.cgi?query=drum&sektion=4&apropos=0&manpath=4.4BSD+Lite2
>
> The claim is that it came from 3.0BSD. Why was it called drum? I imagine
> that's historical license coupled with grad student imagination, but I'm
> curious if it has origin in actual hardware used at UC Berkeley. Clem, that
> was roughly your era, was it not?

http://www.tuhs.org/cgi-bin/utree.pl?file=3BSD/usr/src/sys/sys/vmdrum.c

So there's something called drum in 3BSD. I haven't chased down the MAKEDEV and config glue that turns it into /dev/drum, but it's enough to support the 'It originated in 3BSD'. It certainly wasn't in 32V, since that had no paging.

This was 1980. Drum memory stopped being a new thing in the early 70's, so it was only recently obsolete. Its typical use was as a very small, but very fast, hard drive to swap things to. There never was a drum device - at least a commercial one, as opposed to a lab experiment - for the VAXen. They all swapped to spinning disks by then.

Warner

> - Dan C.

> On Fri, Apr 20, 2018 at 12:00 PM, David Collantes <david at collantes.us> wrote:
>> I found a Wikipedia[0] entry for it.
>>
>> [0] https://en.m.wikipedia.org/wiki/Drum_memory
>>
>> --
>> David Collantes
>> +1-407-484-7171
>>
>> On Apr 20, 2018, at 11:02, Tim Bradshaw <tfb at tfeb.org> wrote:
>>
>> I am sure I remember a machine which had this (which would have been
>> running a BSD 4.2 port). Is my memory right, and what was it for
>> (something related to swap?)?
>>
>> It is stupidly hard to search for (or, alternatively, there are just no
>> hits and the memory is false).
>>
>> --tim
* [TUHS] /dev/drum 2018-04-20 16:33 ` Warner Losh @ 2018-04-20 19:17 ` Ron Natalie 2018-04-20 20:23 ` Clem Cole 2018-04-20 22:10 ` Dave Horsfall 0 siblings, 2 replies; 59+ messages in thread
From: Ron Natalie @ 2018-04-20 19:17 UTC (permalink / raw)

I don't even remember a drum for any UNIBUS system. We had an RF-11 fixed-head disk on our PDP-11/45 and augmented that with a "bulk core" box that looked like the RF-11 to the system as well.

Drums were usually fixed-head, although those UNIVACers out there will remember the Fastrand drums, which had flying heads. Massive units that looked like 6' lengths of sewer pipe covered in iron oxide. It was sort of the basis of UNIVAC storage units.
* [TUHS] /dev/drum 2018-04-20 19:17 ` Ron Natalie @ 2018-04-20 20:23 ` Clem Cole 2018-04-20 22:10 ` Dave Horsfall 1 sibling, 0 replies; 59+ messages in thread
From: Clem Cole @ 2018-04-20 20:23 UTC (permalink / raw)

On Fri, Apr 20, 2018 at 3:17 PM, Ron Natalie <ron at ronnatalie.com> wrote:
> Drums were usually fixed head, although those UNIVACers out there will
> remember the Fastrand drums which had flying head. Massive units looked
> like 6' lengths of sewer pipe covered in iron oxide.
> It was sort of the basis of UNIVAC storage units.

And big, and did I say really *noisy*... We had 4 of them, and they were so noisy that CMU partitioned the machine room so that they were by themselves. The CPUs and the operators were on the other side of a glass wall.

I was looking at an old pic of me in the CMU machine room circa '76. You can see the console of the 1108 and the top of the door to the 'Fastrand' room (which was also where Vax serial #1 went, for space reasons); but unfortunately one of the 360/67's 1Meg memory units (we had 4 - each was a little larger than a Vax 780 cabinet) is in the way, so the drums cannot be seen.

One thing I'll never forget was the first time they powered up CMU's first KL10 (which was DEC's first ECL-based system). It too was an early unit, and DEC Pittsburgh had not been trained on them yet. Unfortunately, the on-site provisioning was mistakenly set up for a KA10, which was horribly insufficient. Well, the techs blew every circuit breaker in the building - shutting down everything. The emergency lights came on and the room was eerily quiet, not a single fan running, but there was this strange humming coming from the Fastrand room. There was so much stored energy in the drums, they kept spinning for a very long time.
* [TUHS] /dev/drum 2018-04-20 19:17 ` Ron Natalie 2018-04-20 20:23 ` Clem Cole @ 2018-04-20 22:10 ` Dave Horsfall 1 sibling, 0 replies; 59+ messages in thread
From: Dave Horsfall @ 2018-04-20 22:10 UTC (permalink / raw)

On Fri, 20 Apr 2018, Ron Natalie wrote:
> Drums were usually fixed head, although those UNIVACers out there will
> remember the Fastrand drums which had flying head. Massive units looked
> like 6' lengths of sewer pipe covered in iron oxide. It was sort of the
> basis of UNIVAC storage units.

Now you have me remembering (always dangerous)... Our DEC Field Circus gingerbeer mentioned that he had to maintain a drum, whereby the drum itself slid back and forth, and he used to take the female operators to, ahem, view it in action...

--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
* [TUHS] /dev/drum 2018-04-20 16:12 ` Dan Cross 2018-04-20 16:21 ` Clem Cole 2018-04-20 16:33 ` Warner Losh @ 2018-04-22 17:01 ` Lars Brinkhoff 2018-04-22 17:37 ` Clem Cole 2 siblings, 1 reply; 59+ messages in thread
From: Lars Brinkhoff @ 2018-04-22 17:01 UTC (permalink / raw)

Dan Cross wrote:
> Why was it called drum? I imagine that's historical license coupled
> with grad student imagination, but I'm curious if it has origin in
> actual hardware used at UC Berkeley. Clem, that was roughly your era,
> was it not?

Seems like the Project Genie 940 at UCB had a drum. Maybe someone wanted to carry the tradition forward.
* [TUHS] /dev/drum 2018-04-22 17:01 ` Lars Brinkhoff @ 2018-04-22 17:37 ` Clem Cole 2018-04-22 19:14 ` Bakul Shah ` (3 more replies) 0 siblings, 4 replies; 59+ messages in thread
From: Clem Cole @ 2018-04-22 17:37 UTC (permalink / raw)

On Sun, Apr 22, 2018 at 1:01 PM, Lars Brinkhoff <lars at nocrew.org> wrote:
> Dan Cross wrote:
> > Why was it called drum? I imagine that's historical license coupled
> > with grad student imagination, but I'm curious if it has origin in
> > actual hardware used at UC Berkeley. Clem, that was roughly your era,
> > was it not?
>
> Seems like the Project Genie 940 at UCB had a drum. Maybe someone
> wanted to carry the tradition forward.

The 'someone' in all of this was Bill Joy (wnj). As I said, in those days all of us knew of older systems that used 'paging drums' - it was a pretty common term for the hunk-a-storage that the system dedicated to paging itself.

It really is just like the fact that by the time of the VAX, DEC was not shipping core memories at all (and few 11's shipped with core either, as, thanks to Moore's law, the price of semiconductor memory had dropped), so calling the main system memory 'core' was obsolete. Thus the UNIX term 'core dump' was really meaningless. [In fact Magic, the OS for the Tektronix Magnolia machine, had 'mos dump' files - because I did that.] But the term 'core file' stuck; tools knew about it, as did the programmers.

The difference is that today's systems, from Windows to the UNIX flavors, stopped needing a dedicated swapping or paging space and were instead taught to just use empty FS blocks. So today's hacker has grown up without really knowing what /dev/swap or /dev/drum was all about - in fact that was exactly the question that started this thread. On the other hand, we still 'dump core' and use the core files for debugging.

So while the term 'drum' lost its meaning, 'core file' - though it might be considered 'quaint' by today's hacker - still has meaning.

Clem
* [TUHS] /dev/drum 2018-04-22 17:37 ` Clem Cole @ 2018-04-22 19:14 ` Bakul Shah 2018-04-22 20:58 ` Mutiny ` (2 subsequent siblings) 3 siblings, 0 replies; 59+ messages in thread
From: Bakul Shah @ 2018-04-22 19:14 UTC (permalink / raw)

On Sun, 22 Apr 2018 13:37:27 -0400 Clem Cole <clemc at ccc.com> wrote:
> available to page itself. it really is just like the fact that by the
> time of the VAX, DEC was not shipping core memories at all (and few 11's
> shipped with core either as the thanks to Moore's law, the price of
> semiconductor memory had dropped), so calling the main system memory 'core'
> was obsolete. Thus, the UNIX term 'core dump' was really meaningless.
> [In fact, Magic, the OS for the Tektronix Magnolia Machine has 'mos dump'
> files - because I did that].

I have wondered if the word "kernel" got used instead of "core" for the essential part of an OS because core was already in use in relation to memory (and with a different derivation). "kernel" may have been used in this context in the '60s but I don't really know who used it first.
* [TUHS] /dev/drum 2018-04-22 17:37 ` Clem Cole 2018-04-22 19:14 ` Bakul Shah @ 2018-04-22 20:58 ` Mutiny 2018-04-22 22:37 ` Clem cole 2018-04-22 21:51 ` Dave Horsfall 2018-04-23 16:42 ` Tim Bradshaw 3 siblings, 1 reply; 59+ messages in thread
From: Mutiny @ 2018-04-22 20:58 UTC (permalink / raw)

From: Clem Cole <clemc@ccc.com>
Sent: Sun, 22 Apr 2018 23:08:32
> Thus, the UNIX term 'core dump' was really meaningless.

UNIX didn't start in 1978 with the VAX and solid-state RAM. It started earlier, when core was cheaper than SS RAM in the DEC world - which was the case even in '77. Yes there was core, and yes, core dump isn't meaningless at all.
* [TUHS] /dev/drum 2018-04-22 20:58 ` Mutiny @ 2018-04-22 22:37 ` Clem cole 0 siblings, 0 replies; 59+ messages in thread
From: Clem cole @ 2018-04-22 22:37 UTC (permalink / raw)

I'm not sure how you got there. I hardly said or implied that Unix started in '78 or on the Vax. I'm fairly aware that Ken had core in his PDP-7, as did many of us (including myself) on many of our PDP-11s in the old times. I started with Fifth and Sixth Edition. Dumping core had meaning to all of us in those days.

All I said was that by the time of the VAX - which very much did spread the gospel of Unix to the world - core memory had fallen from favor and widespread use. The PDP-11 was the last system DEC released with core. BTW, if you want to be correct about dates - DEC released the Vax in '76, not '78 (I personally used to program Vax serial #1 at CMU under VMS 1.0 before I was at UCB, which is what Dan had asked about).

Also FWIW, Bill and Ozzie, whom I knew from my UCB days, did not do the UCB Vax work until a few years later, when the UCB EECS Dept got their first Berkeley Vax and Prof. Fateman, who had come from MIT, wanted to run MACLISP on it - which very much required VM support. V32 (the V7 port for the Vax from AT&T) swapped. Bill and Ozzie added paging support, which was released as BSD 3.0 (Ozzie graduated and left for Cornell as I recall, but where he went is fuzzy in my memory now). Primarily Bill, but with the help of us others, followed quickly with 4.0 and 4.1 (the latter being the first widespread Vax release) with his Fastvax support. Later still, in the early 1980s, Bill led the CSRG team and released 4.1a/b/c. Bill left for Sun and I left for Masscomp. Kirk, Sam and Keith pushed out 4.2 et al to the final NET2 release, which was after my time.

Of those on the list from UCB in my time, there were also Mary Ann Horton in EECS and Debbie S., who was up the hill at LBL, and who both also sometimes reply - I do look to them to correct/affirm me on UCB history. But I personally go back in Unix history to the early/mid 1970s at CMU. I often defer to Noel C. for memories of networking things, which he was a part of at MIT in the same time frame. I defer to Doug, Steve and obviously Ken for things at AT&T and history before Fifth Edition.

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.

> On Apr 22, 2018, at 4:58 PM, Mutiny <mutiny.mutiny at rediffmail.com> wrote:
>
> From: Clem Cole <clemc at ccc.com>
> Sent: Sun, 22 Apr 2018 23:08:32
> Thus, the UNIX term 'core dump' was really meaningless.
>
> UNIX didn't started in 1978 with the VAX and solid state ram. It started earlier when core was cheaper than ss ram in the dec world which was the case even in '77. Yes there was core, and yes, core dump isn't meaningless at all.
> ...
* [TUHS] /dev/drum 2018-04-22 17:37 ` Clem Cole 2018-04-22 19:14 ` Bakul Shah 2018-04-22 20:58 ` Mutiny @ 2018-04-22 21:51 ` Dave Horsfall 2018-04-25 1:27 ` Dan Stromberg 2018-04-23 16:42 ` Tim Bradshaw 3 siblings, 1 reply; 59+ messages in thread
From: Dave Horsfall @ 2018-04-22 21:51 UTC (permalink / raw)

On Sun, 22 Apr 2018, Clem Cole wrote:
> The difference is that todays systems from Windows to UNIX flavors
> stopped needed a dedicated swapping or paging space and instead was
> taught to just use empty FS blocks. So today's hacker has grown up
> without really knowing what /dev/swap or /dev/drum was all about -- in
> fact that was exactly the question that started this thread.

Heh heh... I remember the day that Basser (SydU) got AUSAM running on their spanking new VAX. It crashed, and it was traced to it swapping for the first time in its life (before that, it merely paged).

Now, how many youngsters know the difference between paging and swapping?

--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
* [TUHS] /dev/drum 2018-04-22 21:51 ` Dave Horsfall @ 2018-04-25 1:27 ` Dan Stromberg 2018-04-25 12:18 ` Ronald Natalie 0 siblings, 1 reply; 59+ messages in thread
From: Dan Stromberg @ 2018-04-25 1:27 UTC (permalink / raw)

On Sun, Apr 22, 2018 at 2:51 PM, Dave Horsfall <dave at horsfall.org> wrote:
> Now, how many youngsters know the difference between paging and swapping?

I'm a mere 52, but I believe paging is preferred over swapping.

Swapping is an entire process at a time.

Paging is just a page of memory at a time - like 4K or something thereabout.

BTW: I love Linux :) at least until something better comes along.
* [TUHS] /dev/drum 2018-04-25 1:27 ` Dan Stromberg @ 2018-04-25 12:18 ` Ronald Natalie 2018-04-25 13:39 ` Tim Bradshaw ` (2 more replies) 0 siblings, 3 replies; 59+ messages in thread
From: Ronald Natalie @ 2018-04-25 12:18 UTC (permalink / raw)

> On Apr 24, 2018, at 9:27 PM, Dan Stromberg <drsalists at gmail.com> wrote:
>
> On Sun, Apr 22, 2018 at 2:51 PM, Dave Horsfall <dave at horsfall.org> wrote:
>> Now, how many youngsters know the difference between paging and swapping?
>
> I'm a mere 52, but I believe paging is preferred over swapping.
>
> Swapping is an entire process at a time.
>
> Paging is just a page of memory at a time - like 4K or something thereabout.

Early pages were 1K. The fun argument is what is Virtual Memory. Typically, people align that with paging, but you can stretch the definition to cover swapping. This was a point of contention in the early VAX Unix days, as the AT&T systems (System III, even V?) didn't support paging on the VAX whereas BSD did. Our comment was that "It ain't VIRTUAL memory if it isn't all there" as opposed to virtual addressing.
* [TUHS] /dev/drum 2018-04-25 12:18 ` Ronald Natalie @ 2018-04-25 13:39 ` Tim Bradshaw 2018-04-25 14:02 ` arnold 2018-04-25 14:33 ` Ian Zimmerman 2018-04-25 20:29 ` Paul Winalski 2 siblings, 1 reply; 59+ messages in thread
From: Tim Bradshaw @ 2018-04-25 13:39 UTC (permalink / raw)

On 25 Apr 2018, at 13:18, Ronald Natalie <ron at ronnatalie.com> wrote:
>
> Early pages were 1K.

Do systems with huge pages page in the move-them-to-disk sense, I wonder? I assume they don't in practice, because it would be insane, but I wonder if the VM system is in theory even willing to try.

Something I never completely understood in the paging vs swapping thing was that I think that systems which could page (well, 4.xBSD in particular) would *also* swap if pushed. I think the reason for that was that, if you were really short of memory, swapping freed up the process structure and also the page tables &c for the process, which would still be needed even if all its pages had been evicted. Is that right?
* [TUHS] /dev/drum 2018-04-25 13:39 ` Tim Bradshaw @ 2018-04-25 14:02 ` arnold 2018-04-25 14:59 ` tfb 0 siblings, 1 reply; 59+ messages in thread
From: arnold @ 2018-04-25 14:02 UTC (permalink / raw)

> On 25 Apr 2018, at 13:18, Ronald Natalie <ron at ronnatalie.com> wrote:
> >
> > Early pages were 1K.

Tim Bradshaw <tfb at tfeb.org> wrote:
> Do systems with huge pages page in the move-them-to-disk sense I wonder?
> I assume they don't in practice because it would be insane but I wonder
> if the VM system is in theory even willing to try.

Why not? If there's enough backing store available? Note that many systems demand page-in the code section straight out of the executable, so if some of those pages aren't needed, they can just be released. And said pages can be shared among all processes running the same executable, for further savings.

> Something I never completely understood in the paging vs swapping
> thing was that I think that systems which could page (well, 4.xBSD in
> particular) would *also* swap if pushed. I think the reason for that was
> that, if you were really short of memory, swapping freed up the process
> structure and also the page tables &c for the process, which would still
> be needed even if all its pages had been evicted. Is that right?

It depends upon the system. Some had pageable page tables, which is pretty hairy. Others didn't. I don't remember what 4BSD did on the Vax, but I suspect that the page tables and enough info to find everything on swap stayed in kernel memory. (Where's Chris Torek when you need him? :-)

But yes, swapping was generally used to free up large amounts of memory if under heavy load.

HTH,

Arnold
* [TUHS] /dev/drum 2018-04-25 14:02 ` arnold @ 2018-04-25 14:59 ` tfb 0 siblings, 0 replies; 59+ messages in thread
From: tfb @ 2018-04-25 14:59 UTC (permalink / raw)

On 25 Apr 2018, at 15:02, arnold at skeeve.com wrote:
>
> Why not? If there's enough backing store available?

Well, I think because the situations where you are using huge pages and the situations where you're paging to backing store should really be mutually exclusive: if your system is paging to backing store you have problems which huge pages are not going to solve.
* [TUHS] /dev/drum 2018-04-25 12:18 ` Ronald Natalie 2018-04-25 13:39 ` Tim Bradshaw @ 2018-04-25 14:33 ` Ian Zimmerman 2018-04-25 14:46 ` Larry McVoy 2018-04-25 20:29 ` Paul Winalski 2 siblings, 1 reply; 59+ messages in thread
From: Ian Zimmerman @ 2018-04-25 14:33 UTC (permalink / raw)

On 2018-04-25 08:18, Ronald Natalie wrote:
> The fun argument is what is Virtual Memory. Typically, people align
> that with paging but you can stretch the definition to cover paging.
> This was a point of contention in the early VAX Unix days as the ATT
> (System III, even V?) didn't support paging on the VAX where as BSD
> did. Our comment was that "It ain't VIRTUAL memory if it isn't all
> there" as opposed to virtual addressing.

What about overlays? Virtual or not? The main difference may be where the code lives which brings non-present address space back into directly addressable state: in the kernel (and where: in an interrupt handler or higher up) or in userspace.

--
Please don't Cc: me privately on mailing lists and Usenet, if you also post the followup to the list or newsgroup. To reply privately _only_ on Usenet and on broken lists which rewrite From, fetch the TXT record for no-use.mooo.com.
* [TUHS] /dev/drum 2018-04-25 14:33 ` Ian Zimmerman @ 2018-04-25 14:46 ` Larry McVoy 2018-04-25 15:03 ` ron minnich 0 siblings, 1 reply; 59+ messages in thread
From: Larry McVoy @ 2018-04-25 14:46 UTC (permalink / raw)

So is this also /dev/drums?

https://www.youtube.com/watch?v=_pe6r06EIz0&feature=youtu.be

--
--- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
* [TUHS] /dev/drum 2018-04-25 14:46 ` Larry McVoy @ 2018-04-25 15:03 ` ron minnich 0 siblings, 0 replies; 59+ messages in thread
From: ron minnich @ 2018-04-25 15:03 UTC (permalink / raw)

some of you doubtless remember the term 'expansion swap'. I was joking that at some companies, you seem to be moved just so you can end up in places where you have less room. I called this a 'compression swap'. Nobody got the joke. Oh well.

ron
* [TUHS] /dev/drum 2018-04-25 12:18 ` Ronald Natalie 2018-04-25 13:39 ` Tim Bradshaw 2018-04-25 14:33 ` Ian Zimmerman @ 2018-04-25 20:29 ` Paul Winalski 2018-04-25 20:45 ` Larry McVoy 2018-04-25 23:01 ` Bakul Shah 2 siblings, 2 replies; 59+ messages in thread From: Paul Winalski @ 2018-04-25 20:29 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 1950 bytes --] On 4/25/18, Ronald Natalie <ron at ronnatalie.com> wrote: > > Early pages were 1K. On the VAX, pages were 512 bytes, the same size as a disk record. > The fun argument is what is Virtual Memory. Typically, people align that > with paging but you can stretch the definition to cover paging. > This was a point of contention in the early VAX Unix days as the ATT (System > III, even V?) didn’t support paging on the VAX where as BSD did. In my book, virtual memory is any addressing scheme where the addresses that the program uses are different from the physical memory addresses. Nowadays most OSes use a scheme where each process has its own virtual address space, and virtual memory is demand-paged from backing store on disk. But there have been other schemes. Some PDP-11 models had a virtual addressing feature called PLAS (Program Logical Address Space). The PDP-11 had 16-bit addressing, allowing for at most 64K per process. To take advantage of physical memory larger than 64K, PLAS allowed multiple 64K virtual address spaces to be mapped to the larger physical memory. Sort of the reverse of the usual virtual addressing scheme, where there is more virtual memory than physical memory. IBM Disk Operating System for the System/370 (DOS/VS) had a single virtual address space that could be bigger than physical memory and was demand-paged. This address space could be divided into up to five partitions, each of which could run a program. Each partition thus represents what we now call a process. 
IBM OS/VS1 and OS/VS2 SVS also had a single virtual address space shared by all tasks (processes). OS/VS2 MVS had the modern concept of a separate virtual address space for each process. > Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as > opposed to virtual addressing. Gene Amdahl was not a fan of paging. He said that virtual memory merely magnifies the need for real memory. -Paul W. ^ permalink raw reply [flat|nested] 59+ messages in thread
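Paul's definition above (the addresses a program uses differ from the physical ones) can be sketched as a toy page-table lookup. This is an illustration only: the 512-byte page size matches the VAX figure mentioned above, but the table contents and the function name are invented.

```c
#include <assert.h>
#include <stdint.h>

/* Toy single-level page table: 512-byte pages (the VAX page size
 * mentioned above), eight virtual pages mapped to arbitrary
 * physical pages.  Purely illustrative values. */
#define PAGE_SIZE 512u
#define NPAGES    8u

static const uint32_t page_table[NPAGES] = { 3, 7, 1, 0, 5, 2, 6, 4 };

/* Split a virtual address into (page number, offset), look the page
 * number up in the table, and rebuild the physical address. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr / PAGE_SIZE;  /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;  /* offset within page  */
    assert(vpn < NPAGES);                 /* else: page fault    */
    return page_table[vpn] * PAGE_SIZE + offset;
}
```

Virtual address 1034 is page 2, offset 10; with the table above it lands at physical address 1*512 + 10 = 522. Demand paging adds a "present" bit per entry and a fault path that brings the page in from backing store, which is the role the device behind /dev/drum played.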
* [TUHS] /dev/drum 2018-04-25 20:29 ` Paul Winalski @ 2018-04-25 20:45 ` Larry McVoy 2018-04-25 21:14 ` Lawrence Stewart 2018-04-25 23:01 ` Bakul Shah 1 sibling, 1 reply; 59+ messages in thread From: Larry McVoy @ 2018-04-25 20:45 UTC (permalink / raw) > > Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as > > opposed to virtual addressing. > > Gene Amdahl was not a fan of paging. He said that virtual memory > merely magnifies the need for real memory. Wasn't that a Seymour Cray quote? Seems like he was also not a fan of VM. ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-25 20:45 ` Larry McVoy @ 2018-04-25 21:14 ` Lawrence Stewart 2018-04-25 21:30 ` ron minnich 0 siblings, 1 reply; 59+ messages in thread From: Lawrence Stewart @ 2018-04-25 21:14 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 808 bytes --] > On 2018, Apr 25, at 4:45 PM, Larry McVoy <lm at mcvoy.com> wrote: > >>> Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as >>> opposed to virtual addressing. >> >> Gene Amdahl was not a fan of paging. He said that virtual memory >> merely magnifies the need for real memory. > > Wasn't that a Seymour Cray quote? Seems like he was also not a fan of VM. My favorite is due to Dave Clark at MIT: “Anytime someone says ‘virtual’ you should think ‘slow’” followed closely by “Any problem in computer science can be solved by adding a layer of abstraction” and “Any performance problem can be solved by removing a layer of abstraction.” As one converted to the HPC cults, I am naturally opposed to small pages, virtual anything, and interrupts… -L ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-25 21:14 ` Lawrence Stewart @ 2018-04-25 21:30 ` ron minnich 0 siblings, 0 replies; 59+ messages in thread From: ron minnich @ 2018-04-25 21:30 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 1257 bytes --] there's an interesting parallel discussion going on over at riscv about page size. 4k won the day. fairly shocking in a world of 4 TiB systems but ... ron On Wed, Apr 25, 2018 at 2:21 PM Lawrence Stewart <stewart at serissa.com> wrote: > > > On 2018, Apr 25, at 4:45 PM, Larry McVoy <lm at mcvoy.com> wrote: > > > >>> Our comment was that “It ain’t VIRTUAL memory if it isn’t all > there” as > >>> opposed to virtual addressing. > >> > >> Gene Amdahl was not a fan of paging. He said that virtual memory > >> merely magnifies the need for real memory. > > > > Wasn't that a Seymour Cray quote? Seems like he was also not a fan of > VM. > > My favorite is due to Dave Clark at MIT: “Anytime someone says ‘virtual’ > you should think ‘slow’” > > followed closely by “Any problem in computer science can be solved by > adding a layer of abstraction” and “Any performance problem can be solved > by removing a layer of abstraction.” > > As one converted to the HPC cults, I am naturally opposed to small pages, > virtual anything, and interrupts… > > -L > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180425/71b1f291/attachment.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-25 20:29 ` Paul Winalski 2018-04-25 20:45 ` Larry McVoy @ 2018-04-25 23:01 ` Bakul Shah 1 sibling, 0 replies; 59+ messages in thread From: Bakul Shah @ 2018-04-25 23:01 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 1095 bytes --] This is what we did at Fortune Systems for our 68k based v7 system. There was an external “mmu” which added a base value to a 16 bit virtual address to compute a physical address. And compared against a limit. There were four base,limit pairs that you had to rewrite to context switch: Text, data, spare and stack. At a minimum the system shipped with 256KB so you could have a number of processes memory resident. You swapped out a complete segment when you ran out of space. I imagine other 16bit word size machines of that era used similar schemes. > On Apr 25, 2018, at 1:29 PM, Paul Winalski <paul.winalski at gmail.com> wrote: > > Some PDP-11 models had a virtual addressing feature called PLAS > (Program Logical Address Space). The PDP-11 had 16-bit addressing, > allowing for at most 64K per process. To take advantage of physical > memory larger than 64K, PLAS allowed multiple 64K virtual address > spaces to be mapped to the larger physical memory. Sort of the > reverse of the usual virtual addressing scheme, where there is more > virtual memory than physical memory. ^ permalink raw reply [flat|nested] 59+ messages in thread
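Bakul's base-and-limit scheme is simple enough to sketch in a few lines of C. The numbers and names below are invented for illustration; the real Fortune hardware held four such (base, limit) pairs (text, data, spare, stack) that were rewritten on every context switch.

```c
#include <stdint.h>

/* Base+limit translation: the external "mmu" adds a base value to a
 * 16-bit virtual address and compares against a limit.  Returns the
 * physical address, or -1 on a limit violation. */
static long translate(uint32_t base, uint32_t limit, uint16_t vaddr)
{
    if (vaddr >= limit)
        return -1;              /* segment limit exceeded */
    return (long)base + vaddr;  /* relocate by the base   */
}
```

A data segment based at 0x20000 with a 16 KB limit maps virtual 0x100 to 0x20100 and rejects virtual 0x5000. Swapping out a complete segment, as described above, then amounts to copying limit bytes starting at base.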
* [TUHS] /dev/drum 2018-04-22 17:37 ` Clem Cole ` (2 preceding siblings ...) 2018-04-22 21:51 ` Dave Horsfall @ 2018-04-23 16:42 ` Tim Bradshaw 2018-04-23 17:30 ` Ron Natalie 3 siblings, 1 reply; 59+ messages in thread From: Tim Bradshaw @ 2018-04-23 16:42 UTC (permalink / raw) On 22 Apr 2018, at 18:37, Clem Cole <clemc at ccc.com> wrote: > > But the term 'core file' stuck, tools knew about it, as did the programmers. The difference is that today's systems from Windows to UNIX flavors stopped needing a dedicated swapping or paging space and instead were taught to just use empty FS blocks. So today's hacker has grown up without really knowing what /dev/swap or /dev/drum was all about -- in fact that was exactly the question that started this thread. Well, I had known but forgotten in fact. There's also a distinction between whether a system swaps/pages onto a dedicated device and whether it exposes that device by some special name in /dev. Solaris does (or did until fairly recently: I don't remember what the ZFSy systems do) generally use one or more special devices (not usually whole disks but they could be, and it could swap on files but not, I assume, write crash dumps to them) but I'm pretty sure they were not exposed as /dev/<something> other than the name they would already have in /dev/(r)dsk. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180423/5ff61877/attachment.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-23 16:42 ` Tim Bradshaw @ 2018-04-23 17:30 ` Ron Natalie 2018-04-23 17:51 ` Clem Cole 0 siblings, 1 reply; 59+ messages in thread From: Ron Natalie @ 2018-04-23 17:30 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 583 bytes --] > Well, I had known but forgotten in fact. There's also a distinction between whether a system swaps/pages onto a dedicated device and whether it exposes that device by some special name in /dev. I'm pretty sure that swapping in V6 took place to a major/minor number configured at kernel build time. You could create a dev node for the swap device, but it wasn't used for the actual swapping. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180423/ac648753/attachment.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-23 17:30 ` Ron Natalie @ 2018-04-23 17:51 ` Clem Cole 2018-04-23 18:30 ` Ron Natalie 2018-04-23 20:47 ` Grant Taylor 0 siblings, 2 replies; 59+ messages in thread From: Clem Cole @ 2018-04-23 17:51 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 1364 bytes --] On Mon, Apr 23, 2018 at 1:30 PM, Ron Natalie <ron at ronnatalie.com> wrote: > > > > > Well, I had known but forgotten in fact. There's also a distinction > between whether a system swaps/pages onto a dedicated device and whether it > exposes that device by some special name in /dev. > > > > I’m pretty sure that swapping in V6 took place to a major/minor number > configured at kernel build time. You could create a dev node for the swap > device, but it wasn’t used for the actual swapping. > Exactly... For instance an RK04 was less than 5K blocks (4620 or some such - I've forgotten the actual amount). The disk was mkfs'ed to the first 4K and the leftover was given to the swap system. By the time of 4.X, the RP06 was 'partitioned' into 'rings' (some overlapping). The 'a' partition was root, the 'b' was swap and one of the others was the rest. Later the 'c' was a short form for copying the entire disk. What /dev/drum did was let you cobble together those hunks of reserved space and view them as a single device. I've forgotten what user space programs used it, probably some of the tools like ps, vmstat, *etc.* You'd have to look at the sources. > > > ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180423/b2b7146b/attachment.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-23 17:51 ` Clem Cole @ 2018-04-23 18:30 ` Ron Natalie 2018-04-25 14:02 ` Tom Ivar Helbekkmo 2018-04-23 20:47 ` Grant Taylor 1 sibling, 1 reply; 59+ messages in thread From: Ron Natalie @ 2018-04-23 18:30 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 1338 bytes --] RK05’s were 4872 blocks. Don’t know why that number has stuck with me, too many invocations of mkfs I guess. Oddly, DEC software for some reason never used the last 72 blocks. We used the standalone ROLLIN to make backup copies of our root file system and I remember having to patch the program to make sure it copied all the blocks. Yeah, I remember swaplo and swapdev. We actually dedicated a full 1024 block RF11 fixed head to the system in the early days until we got the system of RK05 packs. I remember buying my personal RK05 pack for $75 thinking that was a lot of personal storage (2.4MB!). Amusingly, JHU believed in volcopy for backups. Our 80 MB drive was divided into multiples of an RK05 pack. The root file system was 5x4872 blocks. The main user file system was another 5. The rest was divided up into single RK05-size things. I remember writing a standalone volcopy to replace ROLLIN that was called “The Fabulous Amtork” program. The 80M drive was an Ampex. AM to RK, get it. I have no idea when or why the adjective “fabulous” became associated with it but that was what it was always called. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180423/f731a05d/attachment-0001.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-23 18:30 ` Ron Natalie @ 2018-04-25 14:02 ` Tom Ivar Helbekkmo 2018-04-25 14:38 ` Clem Cole 0 siblings, 1 reply; 59+ messages in thread From: Tom Ivar Helbekkmo @ 2018-04-25 14:02 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 903 bytes --] Ron Natalie <ron at ronnatalie.com> writes: > RK05’s were 4872 blocks. Don’t know why that number has stuck with > me, too many invocations of mkfs I guess. Oddly, DEC software for > some reason never used the last 72 blocks. I guess that's because they implemented DEC Standard 144, known as bad144 in BSD Unix. It's a system of remapping bad sectors to spares at the end of the disk. I had fun implementing that for the wd (ST506) disk driver in NetBSD, once upon a time... -tih -- Most people who graduate with CS degrees don't understand the significance of Lisp. Lisp is the most important idea in computer science. --Alan Kay -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180425/03203dcd/attachment.sig> ^ permalink raw reply [flat|nested] 59+ messages in thread
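The bad144 idea Tom describes reduces to a small lookup: a table of known bad sector numbers, each forwarded to one of the spare sectors reserved at the end of the disk. A minimal sketch, with illustrative values only: the 4872 blocks and 72 spares are the RK05 figures from the messages above, the bad-sector list is made up, and the real standard keeps the table on the disk itself rather than compiled in.

```c
#include <stdint.h>

#define DISK_SECTORS 4872u  /* RK05-sized disk, per the thread */
#define NSPARES        72u  /* spares reserved at the end      */

/* Known bad sectors, in the order they were discovered. */
static const uint32_t badlist[] = { 100, 2050 };
#define NBAD (sizeof badlist / sizeof badlist[0])

/* Remap: the i-th bad sector is forwarded to the i-th spare;
 * good sectors pass through unchanged. */
static uint32_t remap(uint32_t sector)
{
    for (uint32_t i = 0; i < NBAD; i++)
        if (badlist[i] == sector)
            return DISK_SECTORS - NSPARES + i;
    return sector;
}
```

Note how a remapped access jumps from wherever the head happens to be to the spare area at the far end of the disk, which is exactly the seek-prediction problem Clem complains about later in the thread.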
* [TUHS] /dev/drum 2018-04-25 14:02 ` Tom Ivar Helbekkmo @ 2018-04-25 14:38 ` Clem Cole 0 siblings, 0 replies; 59+ messages in thread From: Clem Cole @ 2018-04-25 14:38 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 3276 bytes --] On Wed, Apr 25, 2018 at 10:02 AM, Tom Ivar Helbekkmo via TUHS < tuhs at minnie.tuhs.org> wrote: > Ron Natalie <ron at ronnatalie.com> writes: > > > RK05’s were 4872 blocks. Don’t know why that number has stuck with > > me, too many invocations of mkfs I guess. Oddly, DEC software for > > some reason never used the last 72 blocks. > > I guess that's because they implemented DEC Standard 144, known as > bad144 in BSD Unix. It's a system of remapping bad sectors to spares at > the end of the disk. I had fun implementing that for the wd (ST506) > disk driver in NetBSD, once upon a time... Right - back in the day, certainly for RK05's we would purchase 'perfect' media, because the original Unix implementations did not handle bad blocks. You had to be careful, but you could purchase perfect RP06 disk packs from one or two vendors. That got harder to do as the disks got larger, *i.e.* the RMxx series. A disk pack typically shipped with either a physical piece of paper or a sticker attached to it with the HD/CYL bit offset and/or, if pre-formatted, the HD/CYL/SEC of any bad spots (in the old days disks used different formats depending on the controller and different sector sizes, metadata, etc., so the bad spots were listed as a bit position per ring). A typical thing that a number of us did if we did not have a perfect pack was, after we did the Unix mkfs, manually create a file in either root or lost+found called bad_blocks, and then, using fsdb or the like, assign those blocks to that file. The problem was the concept of 'grown' bad spots. 
I'll stay away from the argument of whether they were possible or not (long story), but in practice a pack sometimes ended up with a block that caused issues after it was deployed. You needed to assign the bad block to the bad_block file manually. BAD144 was a scheme DEC had to reserve a few blocks and then map out any bad sectors. The problem, of course, was that it screwed up all the prediction SW in the OS: the system would ask for block/cyl/sec X/Y/Z and it could force a head seek to the end of the drive to get one of those reserved blocks at the end of the disk. The advantage of the DEC scheme was that during formatting the known bad sectors would become 'invisible', and of course if the pack 'grew' an error the driver could add it to the list and replace the block on the fly (assuming there were available blocks in the reserved table). An interesting story on BAD144 - a good example of the original ideas of Open Source. It was actually the DEC 'Telephone Industries Group' in Merrimack, NH (*a.k.a.* TIG), which had Armando, Fred Canter *et al.*, that wrote the original support for DEC Standard 144, to help support the AT&T customers pre-Ultrix (remember, for a long time AT&T was DEC's largest customer). Once written, that code was also given to CSRG and released into the wild via the UCB stream (and of course would become part of Ultrix when DEC finally 'supported' UNIX officially). I've long ago forgotten, but aps might remember; I think it was Fred that did the original work. ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180425/75b40b16/attachment-0001.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-23 17:51 ` Clem Cole 2018-04-23 18:30 ` Ron Natalie @ 2018-04-23 20:47 ` Grant Taylor 2018-04-23 21:06 ` Clem Cole 2018-04-23 22:07 ` [TUHS] /dev/drum Tim Bradshaw 1 sibling, 2 replies; 59+ messages in thread From: Grant Taylor @ 2018-04-23 20:47 UTC (permalink / raw) On 04/23/2018 11:51 AM, Clem Cole wrote: > By the time of 4.X, the RP06 was 'partitioned' into 'rings' (some > overlapping). The 'a' partition was root, the 'b' was swap and one of > the others was the rest. Later the 'c' was a short form for copying > the entire disk. I had always wondered where Solaris (SunOS) got its use of the different slices, including the slice that was the entire disk, from. Now I'm guessing Solaris got it from SunOS, which got it from 4.x BSD. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180423/4b59e732/attachment.bin> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-23 20:47 ` Grant Taylor @ 2018-04-23 21:06 ` Clem Cole 2018-04-23 21:14 ` Dan Mick 2018-04-23 22:07 ` [TUHS] /dev/drum Tim Bradshaw 1 sibling, 1 reply; 59+ messages in thread From: Clem Cole @ 2018-04-23 21:06 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 2049 bytes --] On Mon, Apr 23, 2018 at 4:47 PM, Grant Taylor via TUHS <tuhs at minnie.tuhs.org > wrote: > On 04/23/2018 11:51 AM, Clem Cole wrote: > >> By the time of 4.X, the RP06 was 'partitioned' into 'rings' (some >> overlapping). The 'a' partition was root, the 'b' was swap and one fo the >> others was the rest. Later the 'c' was a short form for copying the entire >> disk. >> > > I had always wondered where Solaris (SunOS) got it's use of the different > slices, including the slice that was the entire disk from. > > Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD It was not BSD - it was research. It may have been in 6th, but it was definitely in 7th. Cut/pasted from the V7 PDP-11 rp(4) man page: *NAME* rp − RP-11/RP03 moving-head disk *DESCRIPTION* The files rp0 ... rp7 refer to sections of RP disk drive 0. The files rp8 ... rp15 refer to drive 1 etc. This allows a large disk to be broken up into more manageable pieces. The origin and size of the pseudo-disks on each drive are as follows: disk start length 0 0 81000 1 0 5000 2 5000 2000 3 7000 74000 4-7 unassigned Thus rp0 covers the whole drive, while rp1, rp2, rp3 can serve usefully as a root, swap, and mounted user file system respectively. The rp files access the disk via the system’s normal buffering mechanism and may be read and written without regard to physical disk records. There is also a ‘raw’ interface which provides for direct transmission between the disk and the user’s read or write buffer. 
A single read or write call results in exactly one I/O operation and therefore raw I/O is considerably more efficient when many words are transmitted. The names of the raw RP files begin with rrp and end with a number which selects the same disk section as the corresponding rp file. In raw I/O the buffer must begin on a word boundary. ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180423/c65f3098/attachment-0001.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
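The pseudo-disk table in the man page quoted above renders naturally as code. Here is a sketch of the bounds check and offset a driver like rp would apply to each request; the structure and function names are invented for illustration, but the (start, length) numbers are the v7 rp(4) values.

```c
#include <stdint.h>

/* (start, length) windows onto the drive, from the v7 rp(4) table. */
struct part { uint32_t start, len; };

static const struct part rp[] = {
    { 0,    81000 },  /* rp0: whole drive */
    { 0,     5000 },  /* rp1: root        */
    { 5000,  2000 },  /* rp2: swap        */
    { 7000, 74000 },  /* rp3: user fs     */
};
#define NPART (sizeof rp / sizeof rp[0])

/* Map a block number within pseudo-disk `minor` to a physical block
 * on the drive; -1 if the request runs off the end of the slice. */
static long to_physical(unsigned minor, uint32_t blkno)
{
    if (minor >= NPART || blkno >= rp[minor].len)
        return -1;
    return (long)(rp[minor].start + blkno);
}
```

Block 10 of the swap slice rp1+1 (rp2) lands at physical block 5010. Note that rp0 overlaps the other three entirely, which is the same whole-drive-as-a-slice idea the later 'c' partition convention formalized.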
* [TUHS] /dev/drum 2018-04-23 21:06 ` Clem Cole @ 2018-04-23 21:14 ` Dan Mick 2018-04-23 21:27 ` Clem Cole 0 siblings, 1 reply; 59+ messages in thread From: Dan Mick @ 2018-04-23 21:14 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 2481 bytes --] On 04/23/2018 02:06 PM, Clem Cole wrote: > > > On Mon, Apr 23, 2018 at 4:47 PM, Grant Taylor via TUHS > <tuhs at minnie.tuhs.org <mailto:tuhs at minnie.tuhs.org>> wrote: > > On 04/23/2018 11:51 AM, Clem Cole wrote: > > By the time of 4.X, the RP06 was 'partitioned' into 'rings' > (some overlapping). The 'a' partition was root, the 'b' was > swap and one fo the others was the rest. Later the 'c' was a > short form for copying the entire disk. > > > I had always wondered where Solaris (SunOS) got it's use of the > different slices, including the slice that was the entire disk from. > > Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD > > It was not BSD - it was research. It may have been in 6th, but it was > definitely in 7th. Cut/pasted from the V7 PDP-11 rp(4) man page: > > *NAME* > > rp − RP-11/RP03 moving-head disk > > *DESCRIPTION* > > The files rp0 ... rp7 refer to sections of RP disk drive 0. The > files rp8 ... rp15 refer to drive 1 etc. This > > allows a large disk to be broken up into more manageable pieces. > > The origin and size of the pseudo-disks on each drive are as > follows: > > disk start length > > 0 0 81000 > > 1 0 5000 > > 2 5000 2000 > > 3 7000 74000 > > 4-7 unassigned > > Thus rp0 covers the whole drive, while rp1, rp2, rp3 can serve > usefully as a root, swap, and mounted user > > file system respectively. > > The rp files access the disk via the system’s normal buffering > mechanism and may be read and written > > without regard to physical disk records. There is also a ‘raw’ > interface which provides for direct transmission > > between the disk and the user’s read or write buffer. 
A single > read or write call results in exactly one > > I/O operation and therefore raw I/O is considerably more > efficient when many words are transmitted. The > > names of the raw RP files begin with rrp and end with a number > which selects the same disk section as the > > corresponding rp file. > > In raw I/O the buffer must begin on a word boundary. > > ᐧ But...that has numbers, not letters, and the third partition is not the whole drive, the first one is....? ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] /dev/drum 2018-04-23 21:14 ` Dan Mick @ 2018-04-23 21:27 ` Clem Cole 2018-04-24 3:28 ` [TUHS] 3330s, 3340s, Winchesters... (was: /dev/drum) Greg 'groggy' Lehey 0 siblings, 1 reply; 59+ messages in thread From: Clem Cole @ 2018-04-23 21:27 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 3829 bytes --] On Mon, Apr 23, 2018 at 5:14 PM, Dan Mick <danmick at gmail.com> wrote: > On 04/23/2018 02:06 PM, Clem Cole wrote: > > > > > > On Mon, Apr 23, 2018 at 4:47 PM, Grant Taylor via TUHS > > <tuhs at minnie.tuhs.org <mailto:tuhs at minnie.tuhs.org>> wrote: > > > > On 04/23/2018 11:51 AM, Clem Cole wrote: > > > > By the time of 4.X, the RP06 was 'partitioned' into 'rings' > > (some overlapping). The 'a' partition was root, the 'b' was > > swap and one fo the others was the rest. Later the 'c' was a > > short form for copying the entire disk. > > > > > > I had always wondered where Solaris (SunOS) got it's use of the > > different slices, including the slice that was the entire disk from. > > > > Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD > > > > It was not BSD - it was research. It may have been in 6th, but it was > > definitely in 7th. Cut/pasted from the V7 PDP-11 rp(4) man page: > > > > *NAME* > > > > rp − RP-11/RP03 moving-head disk > > > > *DESCRIPTION* > > > > The files rp0 ... rp7 refer to sections of RP disk drive 0. The > > files rp8 ... rp15 refer to drive 1 etc. This > > > > allows a large disk to be broken up into more manageable pieces. > > > > The origin and size of the pseudo-disks on each drive are as > > follows: > > > > disk start length > > > > 0 0 81000 > > > > 1 0 5000 > > > > 2 5000 2000 > > > > 3 7000 74000 > > > > 4-7 unassigned > > > > Thus rp0 covers the whole drive, while rp1, rp2, rp3 can serve > > usefully as a root, swap, and mounted user > > > > file system respectively. 
> > > > The rp files access the disk via the system’s normal buffering > > mechanism and may be read and written > > > > without regard to physical disk records. There is also a ‘raw’ > > interface which provides for direct transmission > > > > between the disk and the user’s read or write buffer. A single > > read or write call results in exactly one > > > > I/O operation and therefore raw I/O is considerably more > > efficient when many words are transmitted. The > > > > names of the raw RP files begin with rrp and end with a number > > which selects the same disk section as the > > > > corresponding rp file. > > > > In raw I/O the buffer must begin on a word boundary. > > > > ᐧ > > But...that has numbers, not letters, and the third partition is not the > whole drive, the first one is....? > Yup -- disks were pretty expensive in those days ($20-30K for a <100M drive) so often people did not have more than one. So they started with rp1, rp2, etc. As disks got a little cheaper and having more than one RP06 became possible (RP06 aka IBM 3330 - project Winchester -- was a huge 200M drive - we had 3 on the Teklabs machine and that was considered very, very generous), then letters became the convention used in /dev/, i.e. /dev/{,r}rp0{a,b,c,d..} for each of the minor numbers. To be honest, I really don't remember - but I know we used letters for the different partitions on the 11/70 before BSD showed up. The reason for the partitions originally was (and it must have been 6th edition when I first saw it) that DEC finally made a disk large enough that the number of blocks overflowed a 16-bit integer. So splitting the disk into smaller partitions allowed the original seek(2) to work without overflow. V7 introduced lseek(2), where the offset was a long. Clem ᐧ ᐧ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180423/faf9548e/attachment.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
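Clem's overflow point checks out with simple arithmetic: a signed 16-bit block number tops out at 32767, so a drive of at most 32768 blocks (16 MB at 512 bytes per block) is addressable, and the RP03's 81000 blocks are not unless the drive is sliced. A small sketch (the function name is invented):

```c
#include <stdint.h>

/* Can a drive of `blocks` 512-byte blocks be addressed with a
 * signed 16-bit block number, as in the v6 seek(2) era? */
static int fits_16bit(long blocks)
{
    return blocks - 1 <= INT16_MAX;  /* highest block number used */
}
```

An RK05's 4872 blocks fit comfortably; the RP03's 81000 do not, but each of the rp1/rp2/rp3 slices from the man page does, which is exactly the reason Clem gives for the original partitions.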
* [TUHS] 3330s, 3340s, Winchesters... (was: /dev/drum) 2018-04-23 21:27 ` Clem Cole @ 2018-04-24 3:28 ` Greg 'groggy' Lehey 2018-04-24 11:43 ` Paul Winalski 0 siblings, 1 reply; 59+ messages in thread From: Greg 'groggy' Lehey @ 2018-04-24 3:28 UTC (permalink / raw) On Monday, 23 April 2018 at 17:27:29 -0400, Clem Cole wrote: > > As disks dropped a little cheaper and having more than one RP06 > became possible (RP06 aka IBM 3330 - project Winchester You're confusing the 3330 with the 3340: the latter was the Winchester, the first disk with an HDA. The 3330 was the old-style disk pack in a cheese bell. A variant (apparently not from IBM; CDC maybe?) of the same disk pack stored 300 MB, and we used a lot of them at Tandem in the 1970s and early 1980s. I suppose they were pretty widespread. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180424/d723123f/attachment.sig> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] 3330s, 3340s, Winchesters... (was: /dev/drum) 2018-04-24 3:28 ` [TUHS] 3330s, 3340s, Winchesters... (was: /dev/drum) Greg 'groggy' Lehey @ 2018-04-24 11:43 ` Paul Winalski 2018-04-24 13:13 ` Clem Cole 0 siblings, 1 reply; 59+ messages in thread From: Paul Winalski @ 2018-04-24 11:43 UTC (permalink / raw) On 4/23/18, Greg 'groggy' Lehey <grog at lemis.com> wrote: > > You're confusing the 3330 with the 3340: the latter was the > Winchester, the first disk with an HDA. The 3330 was the old-style > disk pack in a cheese bell. A variant (apparently not from IBM; CDC > maybe?) of the same disk pack stored 300 MB, and we used a lot of them > at Tandem in the 1970s and early 1980s. I suppose they were pretty > widespread. The 3330 was, as you say, a conventional (for the day) disk drive where the heads remain with the drive and you removed the platters with a plastic cover very much like a cheese bell. The 3340 was the first IBM drive where the heads were sealed with the media. The disk packs looked somewhat like the front end of the Starship Enterprise, with something like a roll-top desk cover at the back. You put the pack in the drive, the drive opened the roll-top desk and plugged into the back of the head assembly, and you were in business. After a few years IBM discovered that nobody was removing their disks anymore, and so the 3340's follow-ons were not removable. But they still used the sealed-media technology still in use in hard drives today. Regarding the Winchester code name, I've argued about this with Clem before. Clem claims that the code name refers to various advances in disk technology first released in the 3330's disk packs. Wikipedia and my own memory agree with you that Winchester referred to the 3340. -Paul W. ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] 3330s, 3340s, Winchesters... (was: /dev/drum) 2018-04-24 11:43 ` Paul Winalski @ 2018-04-24 13:13 ` Clem Cole 2018-04-25 0:52 ` Greg 'groggy' Lehey 0 siblings, 1 reply; 59+ messages in thread From: Clem Cole @ 2018-04-24 13:13 UTC (permalink / raw) [-- Warning: decoded text below may be mangled, UTF-8 assumed --] [-- Attachment #1: Type: text/plain, Size: 1905 bytes --] On Tue, Apr 24, 2018 at 7:43 AM, Paul Winalski <paul.winalski at gmail.com> wrote: > Clem claims that the code name refers to various advances in > disk technology first released in the 3330's disk packs. Wikipedia > and my own memory agree with you that Winchester referred to the 3340. Interesting ... My source was (is) my friend and former colleague from Stellar, Russ Robelen, who was the HW lead for the 360/50 and the IBM ACS systems. Russ said the original IBM project Winchester first begat the platter (which, as Noel pointed out, was so named because of the 30-30 capacity of the original disk) - which showed up in the 3330 disks as well as the sealed head stuff that Paul and Greg are talking about. What I never asked him was where Memorex fit in. It was somehow a joint project with them, and they got at least some of the technology -- in fact DEC's OEM for the RP05/RP06 was Memorex [the big box of logic on the side of the DEC version was the Massbus to IBM I/O logic converter]. It is also true that the real lasting piece of project Winchester was the embedded (sealed head) stuff that came from the 3340. I should ask Russ what he remembers on this. I'm guessing that maybe there were two parts to the project: the sealed head stuff that Paul and Greg are discussing, which probably was IBM only, as I do not remember that Memorex ever produced a product similar to the 3340. But the platter stuff may have been joint. But I have asked Russ a couple of times, and he very much states that the name 'Project Winchester' came from the platter technology which was first introduced in the IBM 3330. 
I'll have to dig up the book Noel suggests and see what they say (and ask Russ what he thinks of it). ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180424/fdc7bba7/attachment.html> ^ permalink raw reply [flat|nested] 59+ messages in thread
* [TUHS] 3330s, 3340s, Winchesters... (was: /dev/drum) 2018-04-24 13:13 ` Clem Cole @ 2018-04-25 0:52 ` Greg 'groggy' Lehey 2018-04-25 20:54 ` Paul Winalski 0 siblings, 1 reply; 59+ messages in thread From: Greg 'groggy' Lehey @ 2018-04-25 0:52 UTC (permalink / raw) On Tuesday, 24 April 2018 at 9:13:41 -0400, Clem Cole wrote: > On Tue, Apr 24, 2018 at 7:43 AM, Paul Winalski <paul.winalski at gmail.com> > wrote: > >> Clem claims that the code name refers to various advances in >> disk technology first released in the 3330's disk packs. Wikipedia >> and my own memory agree with you that Winchester referred to the 3340. > > Interesting ... My source was (is) my friend and former colleague from > Stellar, Russ Robelen, who was the HW lead for the 360/50 and the IBM ACS > systems. Russ said the original IBM project Winchester first begat the > platter (as Noel pointed out was so named because of the 30-30 capacity of > the original disk) - which showed up in the 3330 disks as well as the > sealed head stuff that Paul and Greg are talking about. Hmm. The earliest 3330s had 100 MB per disk, considerably more than the 3340. I had thought that the 3340 had fewer surfaces. And the 3330s definitely only had one disk per unit, though they brought out an 8-drive cabinet with a whopping 2.4 GB (by the time I used them). > What I never asked him, was where Memorex fit it. It was a somehow > a joint project with them, and they got at least some of the > technology -- in fact DEC's OEM for the RP05/RP06 was Memorex [the > big box of logic on the side of the DEC version was the Massbus to > IBM I/O logic converter]. It is also true that real lasting piece > of project Winchester was the embedded (sealed head) stuff that came > from the 3340. Yesterday I suggested that CDC might have used the same disk packs as the 3330, but I'm now very sure that the ones we used came from Ampex. > I should ask Russ what he remembers, on this. 
It would also be interesting to hear if he can shed any light on the Winchester Boulevard vs. 30/30 controversy. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA
* [TUHS] 3330s, 3340s, Winchesters... (was: /dev/drum) 2018-04-25 0:52 ` Greg 'groggy' Lehey @ 2018-04-25 20:54 ` Paul Winalski 0 siblings, 0 replies; 59+ messages in thread From: Paul Winalski @ 2018-04-25 20:54 UTC (permalink / raw) On 4/24/18, Greg 'groggy' Lehey <grog at lemis.com> wrote: > > Hmm. The earliest 3330s had 100 MB per disk, considerably more than > the 3340. I had thought that the 3340 had fewer surfaces. And the > 3330s definitely only had one disk per unit, though they brought out > an 8-drive cabinet with a whopping 2.4 GB (by the time I used them). The 3340 indeed had fewer platters per unit than the 3330, and because of that a lower disk capacity. Both the 3330 and 3340 were CKD format, not fixed-block, so the capacity depended on the record size. Highest storage capacity was achieved with one record with a zero-length key field covering the full track (called full-track blocking). According to the IBM Archives web page (https://www-03.ibm.com/ibm/history/exhibits/storage/storage_3330.html), the 3330 was code-named Merlin. It could have from 2 to 16 spindles per controller. Originally each disk pack had a maximum capacity of 100 MB. The 3330 model 11 used IBM 3336 disk packs that had double the original capacity (up to 200 MB). This IBM Archives web page (https://www-03.ibm.com/ibm/history/exhibits/storage/storage_3340.html) says that the 3340 was code-named Winchester. This page reports, but does not verify, the "30-30" Winchester rifle story. The IBM 3348 Data Module, the disk pack equivalent for the 3340, was a sealed module that contained the head assembly. This reduced the hazards of head misalignment and surface contamination. Unlike later sealed-module disks, the 3348s were removable media. Modules with maximum capacities of 35 MB or 70 MB were available. There was also a 70 MB module with up to 0.5 MB accessible from fixed heads. -Paul W.
* [TUHS] /dev/drum 2018-04-23 20:47 ` Grant Taylor 2018-04-23 21:06 ` Clem Cole @ 2018-04-23 22:07 ` Tim Bradshaw 2018-04-23 22:15 ` Warner Losh 2018-04-23 23:45 ` [TUHS] /dev/drum Arthur Krewat 1 sibling, 2 replies; 59+ messages in thread From: Tim Bradshaw @ 2018-04-23 22:07 UTC (permalink / raw) On 23 Apr 2018, at 21:47, Grant Taylor via TUHS <tuhs at minnie.tuhs.org> wrote: > I had always wondered where Solaris (SunOS) got its use of the different slices, including the slice that was the entire disk, from. > > Now I'm guessing Solaris got it from SunOS which got it from 4.x BSD. > There is a wonderful Sun cretinism about this. At some recent time (I am not sure how recent but probably mid 2000s), someone worked out that you wanted swap to be at one end of the disk (I think the outside) because on modern disks the data rate changes across the disk and you wanted it at the end with the highest data rate. But lots of things knew that swap was on s1, the second partition. So they changed the default installation tool so the slices of the disk were out of order: s1 was the first, s0 the second, s2 was the whole disk (which it already was) and so on. This was enormously entertaining in a bad way if you made the normal assumption that the slices were in order. There was also (either then or before) some magic needed such that swapping never touched the first n blocks of the disk where the label and boot blocks were, and it was possible to get this wrong so the machine would happily boot and run, but would then fail to boot again, usually at a most inconvenient time. And the cretinism was that this was the mid 2000s: if you had a machine that was paging, the answer was to buy more memory, not to arrange for faster swap space: it was solving a problem that nobody had any more.
* [TUHS] /dev/drum 2018-04-23 22:07 ` [TUHS] /dev/drum Tim Bradshaw @ 2018-04-23 22:15 ` Warner Losh 2018-04-23 23:30 ` Grant Taylor 2018-04-23 23:45 ` [TUHS] /dev/drum Arthur Krewat 1 sibling, 1 reply; 59+ messages in thread From: Warner Losh @ 2018-04-23 22:15 UTC (permalink / raw) On Mon, Apr 23, 2018 at 4:07 PM, Tim Bradshaw <tfb at tfeb.org> wrote: [...] > And the cretinism was that this was mid 2000s: if you had a machine that > was paging the answer was to buy more memory not to arrange for faster swap > space: it was solving a problem that nobody had any more. It's weird. These days lower LBAs perform better on spinning drives. 
We're seeing about 1.5x better performance on the first 30% of a drive than on the last 30%, at least for read speeds for video streaming... Warner
* [TUHS] /dev/drum 2018-04-23 22:15 ` Warner Losh @ 2018-04-23 23:30 ` Grant Taylor 2018-04-24 9:37 ` Michael Kjörling 2018-04-25 1:31 ` [TUHS] Disk data layout (was: /dev/drum) Greg 'groggy' Lehey 0 siblings, 2 replies; 59+ messages in thread From: Grant Taylor @ 2018-04-23 23:30 UTC (permalink / raw) On 04/23/2018 04:15 PM, Warner Losh wrote: > It's weird. These days lower LBAs perform better on spinning drives. > We're seeing about 1.5x better performance on the first 30% of a drive > than on the last 30%, at least for read speeds for video streaming.... I think manufacturers have switched things around on us. I'm used to higher LBA numbers being on the outside of the disk. But I've seen anecdotal indicators that the opposite is now true. -- Grant. . . . unix || die
* [TUHS] /dev/drum 2018-04-23 23:30 ` Grant Taylor @ 2018-04-24 9:37 ` Michael Kjörling 2018-04-24 9:57 ` Dave Horsfall 2018-04-24 13:03 ` Arthur Krewat 1 sibling, 2 replies; 59+ messages in thread From: Michael Kjörling @ 2018-04-24 9:37 UTC (permalink / raw) On 23 Apr 2018 17:30 -0600, from tuhs at minnie.tuhs.org (Grant Taylor via TUHS): > On 04/23/2018 04:15 PM, Warner Losh wrote: >> It's weird. These days lower LBAs perform better on spinning >> drives. We're seeing about 1.5x better performance on the first >> 30% of a drive than on the last 30%, at least for read speeds for >> video streaming.... > > I think manufacturers have switched things around on us. I'm used > to higher LBA numbers being on the outside of the disk. But I've > seen anecdotal indicators that the opposite is now true. I couldn't quite resist, so tried it out. Take this for what it is, an anecdote. Reading 10 GB in direct mode using dd with no skip at the beginning, in 1 MiB blocks, gives me about 190 MB/s on one of the Seagate SAS disks in my PC, and some 165 MB/s on one of the HGST SATA disks in the same PC. Obviously, that's for purely sequential I/O, with very little other I/O load. Doing the same with an initial skip of 3,500,000 blocks (these are 4 TB drives, so this puts the read toward the outer limit), I get 105 MB/s on the Seagate SAS and 100 MB/s on the HGST SATA. I did the same thing twice to make sure caching wasn't somehow interfering with the values. The differences for all reported transfer rates were marginal, and well within a reasonable margin of error. That's definitely statistically significantly slower toward the outer edge of the disk as presented by the OS. 
That _should_ translate to slower for higher LBAs, but with all the magic happening in modern systems, you might never know... Of course, back in ye olden days, even 100 MB/s would have been blazingly fast. Are we spoiled these days to think of throughputs on the order of a gigabit per second as slow? -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “The most dangerous thought that you can have as a creative person is to think you know what you’re doing.” (Bret Victor)
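The kind of measurement Michael describes can be sketched with plain GNU dd. Everything below is an assumption for illustration: the scratch-file path is invented, and for safety the sketch reads a small file rather than a raw disk. To reproduce the real test you would point it at a device node, add iflag=direct, and use a skip on the order of 3,500,000 MiB-blocks to reach the high-LBA zone of a 4 TB drive.

```shell
# Sketch of the start-vs-end sequential read comparison (names and
# sizes invented; on a real drive use the raw device and iflag=direct).
dev=${DEV:-/tmp/zonetest.img}
if [ ! -e "$dev" ]; then
    # create a 64 MiB scratch file so the example is self-contained
    dd if=/dev/zero of="$dev" bs=1M count=64 2>/dev/null
fi
# 16 MiB sequential read from the start (low LBAs); dd prints the
# throughput on stderr, and tail keeps just that summary line
dd if="$dev" of=/dev/null bs=1M count=16 2>&1 | tail -n 1
# the same read from the far end (high LBAs); skip counts bs-sized blocks
dd if="$dev" of=/dev/null bs=1M count=16 skip=48 2>&1 | tail -n 1
```

On a zoned drive the two reported rates differ; on a file (or an SSD) they should be essentially identical, which is itself a useful sanity check.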
* [TUHS] /dev/drum 2018-04-24 9:37 ` Michael Kjörling @ 2018-04-24 9:57 ` Dave Horsfall 2018-04-24 13:01 ` Nemo 1 sibling, 1 reply; 59+ messages in thread From: Dave Horsfall @ 2018-04-24 9:57 UTC (permalink / raw) On Tue, 24 Apr 2018, Michael Kjörling wrote: [ ... ] > That's definitely statistically significantly slower toward the outer > edge of the disk as presented by the OS. That _should_ translate to > slower for higher LBAs, but with all the magic happening in modern > systems, you might never know... Exactly; with disks these days it's pointless trying to second-guess what's happening under the bonnet (ObUS: hood). A well-known question in the security field, for instance, is how do you know that you have *really* erased that disk? The thing can lie as much as it likes, and you'll never know for sure. -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
* [TUHS] /dev/drum 2018-04-24 9:57 ` Dave Horsfall @ 2018-04-24 13:01 ` Nemo 0 siblings, 0 replies; 59+ messages in thread From: Nemo @ 2018-04-24 13:01 UTC (permalink / raw) On 24 April 2018 at 05:57, Dave Horsfall <dave at horsfall.org> wrote: [...] > A well-known question in the security field, for instance, is how do you > know that you have *really* erased that disk? The thing can lie as much as > it likes, and you'll never know for sure. That reminds me of an old mil-spec that told you how to dispose of disc-packs by grinding down the platters by hand, what grit of paper to use, which directions, and how often. (Exact number escapes me.) N.
* [TUHS] /dev/drum 2018-04-24 9:37 ` Michael Kjörling 2018-04-24 9:57 ` Dave Horsfall @ 2018-04-24 13:03 ` Arthur Krewat 1 sibling, 0 replies; 59+ messages in thread From: Arthur Krewat @ 2018-04-24 13:03 UTC (permalink / raw) On 4/24/2018 5:37 AM, Michael Kjörling wrote: > > I couldn't quite resist, so tried it out. Take this for what it is, an > anecdote. > To make it even worse in terms of trying to figure out what a disk will do for (to) you in terms of performance... A new caching scheme on a set of 10TB drives I bought a while ago - they are HGST SAS drives - incorporates a set of cylinders spaced across the disk that are "cache" cylinders. Wherever the heads are "right now", the closest cache cylinders will be used to write to. Once there's enough free time, it will go back and read those cache cylinders, and put the blocks where they are "supposed" to be. http://www.tomsitpro.com/articles/hgst-ultrastar-c15k600-hdd,2-906-3.html Those are not the drives I bought, but as far as I know, even their near-line SAS products have this feature, called NVC Quick Cache. ak
* [TUHS] Disk data layout (was: /dev/drum) 2018-04-23 23:30 ` Grant Taylor 2018-04-24 9:37 ` Michael Kjörling @ 2018-04-25 1:31 ` Greg 'groggy' Lehey 2018-04-25 6:43 ` Hellwig Geisse 2018-04-25 21:17 ` Paul Winalski 1 sibling, 2 replies; 59+ messages in thread From: Greg 'groggy' Lehey @ 2018-04-25 1:31 UTC (permalink / raw) On Monday, 23 April 2018 at 17:30:24 -0600, Grant Taylor via TUHS wrote: > On 04/23/2018 04:15 PM, Warner Losh wrote: >> It's weird. These days lower LBAs perform better on spinning >> drives. We're seeing about 1.5x better performance on the first >> 30% of a drive than on the last 30%, at least for read speeds for >> video streaming.... This relates to modern disks, of course, where there are different amounts of data on each track. Until about 1990 each track had a constant amount of data on it. > I think manufacturers have switched things around on us. I'm used > to higher LBA numbers being on the outside of the disk. But I've > seen anecdotal indicators that the opposite is now true. "LBA" is newer than the time we're talking of. In those days, disk data was addressed physically, by cylinder, head and sector, terms that only died out round the turn of the century. I was heavily involved in disk data recovery in the 1980s, and I know beyond any doubt that cylinder 0 was on the outside (closest to where the heads retracted before changing a pack). I was amazed when I discovered that CD-ROMs started counting from the inside. But how do I prove it? I've done some net trawling and looking through my pile of junk here, but I can't find anything convincing. http://www.datadoctor.biz/data_recovery_programming_book_chapter2-page16.html shows a layout like this, and also the statement The tracks are numbered, starting from 0, starting at the outside of the platter and increasing as you go in. But by itself that's not overly convincing. I looked at the manual for the IBM 3330, but I couldn't find anything specific. 
Does anybody else have any useful reference? Of course, you can map CHS to LBA in multiple ways, and conceivably it has been done differently. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA
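For concreteness, the mapping Greg alludes to can be sketched as below. Hedged: this is the later conventional BIOS/ATA linearization, not necessarily what any particular 1980s controller did — and, as Greg notes, the mapping itself says nothing about whether cylinder 0 sits at the outer or inner edge of the platter.

```shell
# Classic CHS -> LBA mapping (the conventional BIOS/ATA scheme, shown
# here as an illustration only). Cylinders and heads count from 0;
# CHS sectors count from 1, hence the "s - 1".
chs_to_lba() {
    c=$1 h=$2 s=$3 heads=$4 spt=$5
    echo $(( (c * heads + h) * spt + s - 1 ))
}

chs_to_lba 0 0 1 16 63   # cylinder 0, head 0, sector 1 -> LBA 0
chs_to_lba 1 0 1 16 63   # with 16 heads, 63 sectors/track -> LBA 1008
```

Under this convention LBA 0 is simply "cylinder 0, head 0, sector 1" wherever the drive chooses to put it, which is why a measurement at the LBA level cannot by itself settle where track 0 physically lives.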
* [TUHS] Disk data layout (was: /dev/drum) 2018-04-25 1:31 ` [TUHS] Disk data layout (was: /dev/drum) Greg 'groggy' Lehey @ 2018-04-25 6:43 ` Hellwig Geisse 0 siblings, 0 replies; 59+ messages in thread From: Hellwig Geisse @ 2018-04-25 6:43 UTC (permalink / raw) On Wed, 2018-04-25 at 11:31 +1000, Greg 'groggy' Lehey wrote: > > But by itself that's not overly convincing. I looked at the manual > for the IBM 3330, but I couldn't find anything specific. Does anybody > else have any useful reference? > I kept the manual of the first hard disk drive that I ever integrated into a microcomputer system. This was a Shugart SA600 Fixed Disk Drive; the OEM manual is Copyright 1981. It had the (then) common 5.25" form factor, and could store 8 Mbytes on non-replaceable platters with 6 surfaces. Figure 2-4 in the manual shows the disk surface. "TRK 000" is the outermost track, "TRK 159" the innermost one. Even closer to the spindle is the "Head Shipping Zone" on "TRK 182", which could be accessed by a regular seek. Finally, section 4.4.1 explicitly states that the interface signal "Track 00" is true only when "the read/write heads of the selected drive are at track 00 (the outermost track)". Hellwig
* [TUHS] Disk data layout (was: /dev/drum) 2018-04-25 1:31 ` [TUHS] Disk data layout (was: /dev/drum) Greg 'groggy' Lehey 2018-04-25 6:43 ` Hellwig Geisse @ 2018-04-25 21:17 ` Paul Winalski 2018-04-25 21:55 ` Bakul Shah 2018-04-27 15:47 ` Dave Horsfall 1 sibling, 2 replies; 59+ messages in thread From: Paul Winalski @ 2018-04-25 21:17 UTC (permalink / raw) On 4/24/18, Greg 'groggy' Lehey <grog at lemis.com> wrote: > > "LBA" is newer than the time we're talking of. In those days, disk > data was addressed physically, by cylinder, head and sector, terms > that only died out round the turn of the century. IBM DASD--Direct Access Storage Device, a term that encompassed drums, disks and the data cell drive--addressed data on the media physically by bin, cylinder, head, and record, as a hexadecimal number BBCCHHR. Bin number was zero except on the IBM 2321 data cell drive. CKD drives supported a variable number of records on each track, hence the term "record" rather than "sector". Logical block addressing (LBA) for sector-oriented disks allowed the OS (or later, the disk controller) to hide bad block replacement from programs. VAX/VMS used a protected system file ([sysexe]badblock.sys) to keep track of the bad blocks on each disk volume. DEC once got a customer bug report complaining that if a privileged user gave the command "TYPE SYS$SYSTEM:BADBLOCK.SYS", the console logged bad block errors. How does/did Unix handle bad block replacement? -Paul W.
* [TUHS] Disk data layout (was: /dev/drum) 2018-04-25 21:17 ` Paul Winalski @ 2018-04-25 21:55 ` Bakul Shah 2018-04-27 15:47 ` Dave Horsfall 1 sibling, 0 replies; 59+ messages in thread From: Bakul Shah @ 2018-04-25 21:55 UTC (permalink / raw) At Fortune Systems we kept a forwarding map of bad blocks. This was stored in a file at inode #1, IIRC. To read/write block n you check if n was in the list. If not, you read/write disk block n. Else you use the mapped block. The last N blocks were reserved for this. Actually we had to keep some state. A block that gave an error on read was marked bad so that further reads errored out. But a write to such a block would allocate a new block and add an entry in the bad block forwarding map. Further read/writes would then go to this block. I also kept soft (correctable) error stats. These blocks were more likely to go bad, so periodically a background task would preemptively forward these blocks. To test this logic I used to build the v7 kernel on a very unreliable disk! But this slowed things down a lot - having to seek to the end zone and then back for the next block. I think in later disk drives an extra block per track was reserved for this and the drive took care of this. It would’ve made more sense to avoid this forwarding by exposing bad blocks to the file system layer so that it can avoid allocating them, but this would’ve complicated the interface. > On Apr 25, 2018, at 2:17 PM, Paul Winalski <paul.winalski at gmail.com> wrote: > > How does/did Unix handle bad block replacement?
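The lookup Bakul describes can be sketched like this. Everything here is invented for illustration — the map file path, its format, and the function name; the real Fortune map lived at inode 1 and the driver kept it in kernel memory, not in a text file.

```shell
# Toy sketch of a bad-block forwarding map: bad block -> replacement
# block in the reserved zone at the end of the disk. Names, paths,
# and format are all hypothetical.
BADMAP=${BADMAP:-/tmp/badmap}
printf '%s\n' '1042 999990' '77310 999991' > "$BADMAP"

# resolve_block N: print the physical block actually used for block N;
# blocks not in the map are used as-is
resolve_block() {
    fwd=$(awk -v b="$1" '$1 == b { print $2 }' "$BADMAP")
    echo "${fwd:-$1}"
}

resolve_block 1042    # forwarded into the reserved zone -> 999990
resolve_block 5       # clean block, used as-is -> 5
```

The sketch also makes the cost visible: every forwarded access pays a seek to the reserved zone and back, which is why later drives moved spares into each track or cylinder instead.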
* [TUHS] Disk data layout (was: /dev/drum) 2018-04-25 21:17 ` Paul Winalski 2018-04-25 21:55 ` Bakul Shah @ 2018-04-27 15:47 ` Dave Horsfall 2018-04-27 18:16 ` Paul Winalski 1 sibling, 1 reply; 59+ messages in thread From: Dave Horsfall @ 2018-04-27 15:47 UTC (permalink / raw) On Wed, 25 Apr 2018, Paul Winalski wrote: > Bin number was zero except on the IBM 2321 data cell drive. CKD drives > supported a variable number of records on each track, hence the term > "record" rather than "sector". Would that have been the infamous chicken-plucker?[*] Wherein if things went OK, then they were OK, but if things went wrong... [*] Also known by, ahem, a name that rhymes... -- Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW) People who fail to / understand security / surely will suffer. (tks: RichardM)
* [TUHS] Disk data layout (was: /dev/drum) 2018-04-27 15:47 ` Dave Horsfall @ 2018-04-27 18:16 ` Paul Winalski 2018-04-27 18:37 ` Clem Cole 0 siblings, 1 reply; 59+ messages in thread From: Paul Winalski @ 2018-04-27 18:16 UTC (permalink / raw) On 4/27/18, Dave Horsfall <dave at horsfall.org> wrote: > On Wed, 25 Apr 2018, Paul Winalski wrote: > >> Bin number was zero except on the IBM 2321 data cell drive. CKD drives >> supported a variable number of records on each track, hence the term >> "record" rather than "sector". > > Would that have been the infamous chicken-plucker?[*] Wherein if things > went OK, then they were OK, but if things went wrong... That's the one. I'd always heard "noodle picker", though. From the outside it looked like a drum in the shape of a multi-sided polygon. Each bin held a number of wide pieces of multitrack magnetic tape. To seek to your data, the drum rotated under a head that pulled the appropriate tape out of the bin and wrapped it on the rotating drum. From there reading and writing was much like any other drum device. It's as if someone asked Rube Goldberg to design a disk drive. -Paul W.
* [TUHS] Disk data layout (was: /dev/drum) 2018-04-27 18:16 ` Paul Winalski @ 2018-04-27 18:37 ` Clem Cole 0 siblings, 0 replies; 59+ messages in thread From: Clem Cole @ 2018-04-27 18:37 UTC (permalink / raw) On Fri, Apr 27, 2018 at 2:16 PM, Paul Winalski <paul.winalski at gmail.com> wrote: > On 4/27/18, Dave Horsfall <dave at horsfall.org> wrote: > > On Wed, 25 Apr 2018, Paul Winalski wrote: > > > >> Bin number was zero except on the IBM 2321 data cell drive. CKD drives > >> supported a variable number of records on each track, hence the term > >> "record" rather than "sector". > > > > Would that have been the infamous chicken-plucker?[*] Wherein if things > > went OK, then they were OK, but if things went wrong... > > That's the one. I'd always heard "noodle picker", though. From the > outside it looked like a drum in the shape of a multi-sided polygon. > Each bin held a number of wide pieces of multitrack magnetic tape. To > seek to your data, the drum rotated under a head that pulled the > appropriate tape out of the bin and wrapped it on rotating drum. From > there reading and writing was much like any other drum device. It's > as if someone asked Rube Goldberg to design a disk drive. Except that it was like tape in that the head contacted the media. Question for Debbie S --- my memory is you folks had one up the hill at LBL. I think that's where I saw it in action.
* [TUHS] /dev/drum 2018-04-23 22:07 ` [TUHS] /dev/drum Tim Bradshaw 2018-04-23 22:15 ` Warner Losh @ 2018-04-23 23:45 ` Arthur Krewat 2018-04-24 8:05 ` tfb 1 sibling, 2 replies; 59+ messages in thread From: Arthur Krewat @ 2018-04-23 23:45 UTC (permalink / raw) On 4/23/2018 6:07 PM, Tim Bradshaw wrote: > And the cretinism was that this was mid 2000s: if you had machine that was paging the answer was to buy more memory not to arrange for faster swap space: it was solving a problem that nobody had any more. > > Wow, I was going to refute this, as I don't ever remember seeing any Solaris box do this. However, a Solaris 8 x86 vanilla install on VMware I did recently does indeed have swap on the first cylinders (see below). A Solaris 7 x86 install does not exhibit this behavior. Now I'm wondering if Solaris 9 does it. A Solaris 10 box I have access to has swap after root, not before.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 22166 alt 2 hd 15 sec 63>
          /pci at 0,0/pci-ide at 7,1/ide at 0/cmdk at 0,0
Specify disk (enter its number): 0
selecting c0d0
Controller working list found
[disk formatted, defect list found]
Warning: Current Disk has mounted partitions.
FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        show       - translate a disk address
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> part

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> pri
Current partition table (original):
Total disk cylinders available: 22166 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm    1113 -  3704        1.17GB    (2592/0/0)    2449440
  1       swap    wu       3 -  1112      512.18MB    (1110/0/0)    1048950
  2     backup    wm       0 - 22166        9.99GB   (22167/0/0)   20947815
  3 unassigned    wm       0                   0         (0/0/0)          0
  4 unassigned    wm       0                   0         (0/0/0)          0
  5 unassigned    wm       0                   0         (0/0/0)          0
  6 unassigned    wm       0                   0         (0/0/0)          0
  7       home    wm    3705 - 22166        8.32GB   (18462/0/0)   17446590
  8       boot    wu       0 -     0        0.46MB       (1/0/0)        945
  9 alternates    wu       1 -     2        0.92MB       (2/0/0)       1890
* [TUHS] /dev/drum 2018-04-23 23:45 ` [TUHS] /dev/drum Arthur Krewat @ 2018-04-24 8:05 ` tfb 0 siblings, 0 replies; 59+ messages in thread From: tfb @ 2018-04-24 8:05 UTC (permalink / raw) On 24 Apr 2018, at 00:45, Arthur Krewat <krewat at kilonet.net> wrote: > > Wow, I was going to refute this, as I don't ever remember seeing any Solaris box do this. However, a Solaris 8 x86 vanilla install on VMware I did recently does indeed have swap on the first cylinders (see below). A Solaris 7 x86 install does not exhibit this behavior. Now I'm wondering if Solaris 9 does it. A Solaris 10 box I have access to has swap after root, not before. It makes sense that it was 8. We hardly used 9 so I don't know what it did, but I suspect by 10 someone had had very strong words with whoever thought it was a good idea after some large customer had had an only-technically-their-fault outage because of it and been fierce at Sun. It was really stupidly dumb, I thought: solving a problem no-one had any more by the wrong method (if paging *was* a problem a better solution is to add a second slice on another physical disk as swap).