The Unix Heritage Society mailing list
* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
       [not found] <mailman.5.1546912802.21858.tuhs@minnie.tuhs.org>
@ 2019-01-08 10:15 ` Steve Simon
  2019-01-08 11:00   ` Andreas Kusalananda Kähäri
  2019-01-08 14:58   ` Warner Losh
  0 siblings, 2 replies; 9+ messages in thread
From: Steve Simon @ 2019-01-08 10:15 UTC (permalink / raw)
  To: tuhs


re disk strategy
i understood that this implemented the elevator algorithm, and possibly rotational latency compensation.

re non-gcc compilers
there was a time in the early 2000s when some people tried to release plan9's (ken's) c compiler for use in BSD; sadly (for plan9) this didn't happen, and pcc was reanimated instead.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-08 10:15 ` [TUHS] TUHS Digest, Vol 38, Issue 10 Steve Simon
@ 2019-01-08 11:00   ` Andreas Kusalananda Kähäri
  2019-01-08 14:58   ` Warner Losh
  1 sibling, 0 replies; 9+ messages in thread
From: Andreas Kusalananda Kähäri @ 2019-01-08 11:00 UTC (permalink / raw)
  To: Steve Simon; +Cc: tuhs

On Tue, Jan 08, 2019 at 10:15:06AM +0000, Steve Simon wrote:
> 
> re disk strategy
> i understood that this implemented the elevator algorithm, and possibly rotational latency compensation.
> 
> re non-gcc compilers
> there was a time in the early 2000s when some people tried to release plan9's (ken's) c compiler for use in BSD; sadly (for plan9) this didn't happen, and pcc was reanimated instead.

Is it possible that this is related to that second point?

    http://gsoc.cat-v.org/projects/kencc/


-- 
Andreas Kusalananda Kähäri,
National Bioinformatics Infrastructure Sweden (NBIS),
Uppsala University, Sweden.


* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-08 10:15 ` [TUHS] TUHS Digest, Vol 38, Issue 10 Steve Simon
  2019-01-08 11:00   ` Andreas Kusalananda Kähäri
@ 2019-01-08 14:58   ` Warner Losh
  2019-01-09  2:10     ` Dave Horsfall
  1 sibling, 1 reply; 9+ messages in thread
From: Warner Losh @ 2019-01-08 14:58 UTC (permalink / raw)
  To: Steve Simon; +Cc: TUHS main list


On Tue, Jan 8, 2019 at 3:44 AM Steve Simon <steve@quintile.net> wrote:

>
> re disk strategy
> i understood that this implemented the elevator algorithm, and possibly
> rotational latency compensation.
>

I know what it does. I want to know why that specific name was selected.

Warner



* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-08 14:58   ` Warner Losh
@ 2019-01-09  2:10     ` Dave Horsfall
  2019-01-09  4:23       ` Warner Losh
  2019-01-09  5:19       ` Bakul Shah
  0 siblings, 2 replies; 9+ messages in thread
From: Dave Horsfall @ 2019-01-09  2:10 UTC (permalink / raw)
  To: The Eunuchs Hysterical Society

On Tue, 8 Jan 2019, Warner Losh wrote:

>       i understood that this implemented the elevator algorithm, and
>       possible rotational latency compensation.
> 
> 
> I know what it does. I want to know why that specific name was selected.

Err, because as I replied in a previous message (did you not see it?), it 
was up to the programmer to implement an optimal strategy to access the 
sectors, depending upon the device?  I'm not being snarky, but it seems 
like an obvious choice (if not a hint) to me...

Let's see, I need a strategy to optimise access, taking into account
seek and rotational latency.  I know!  I'll call it XXstrategy()!

For example, I could envisage a disk where the sectors are deliberately 
not numbered sequentially i.e. they've taken rotational latency into 
account for you?

Out of interest, what would you have called it?  XXaccess(), perhaps?

-- Dave


* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-09  2:10     ` Dave Horsfall
@ 2019-01-09  4:23       ` Warner Losh
  2019-01-09 19:13         ` Christian Barthel
  2019-01-09  5:19       ` Bakul Shah
  1 sibling, 1 reply; 9+ messages in thread
From: Warner Losh @ 2019-01-09  4:23 UTC (permalink / raw)
  To: Dave Horsfall; +Cc: The Eunuchs Hysterical Society


On Tue, Jan 8, 2019, 7:11 PM Dave Horsfall <dave@horsfall.org> wrote:

> On Tue, 8 Jan 2019, Warner Losh wrote:
>
> >       i understood that this implemented the elevator algorithm, and
> >       possible rotational latency compensation.
> >
> >
> > I know what it does. I want to know why that specific name was selected.
>
> Err, because as I replied in a previous message (did you not see it?), it
> was up to the programmer to implement an optimal strategy to access the
> sectors, depending upon the device?  I'm not being snarky, but it seems
> like an obvious choice (if not a hint) to me...
>
> Let's see, I need a strategy to optimise access, taking into account
> seek and rotational latency.  I know!  I'll call it XXstrategy()!
>
> For example, I could envisage a disk where the sectors are deliberately
> not numbered sequentially i.e. they've taken rotational latency into
> account for you?
>
> Out of interest, what would you have called it?  XXaccess(), perhaps?
>

The name seems obvious because I've seen it for the last 30 years. But I've
not seen it used elsewhere, nor have I seen it documented except in
relation to Unix. It could have been called blkio or bufio or bio or
even just work or morework and still been as meaningful. VMS uses the FDT
table to process the IRPs sent down. RT-11 has a series of entry points
with boring names. Other systems have a start routine (though more often
that is a common routine shared by the enqueue and ISR paths). There is
wide diversity here...

Warner

>



* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-09  2:10     ` Dave Horsfall
  2019-01-09  4:23       ` Warner Losh
@ 2019-01-09  5:19       ` Bakul Shah
  2019-01-09  5:45         ` Warner Losh
  1 sibling, 1 reply; 9+ messages in thread
From: Bakul Shah @ 2019-01-09  5:19 UTC (permalink / raw)
  To: Dave Horsfall; +Cc: The Eunuchs Hysterical Society

On Wed, 09 Jan 2019 13:10:18 +1100 Dave Horsfall <dave@horsfall.org> wrote:
Dave Horsfall writes:
> On Tue, 8 Jan 2019, Warner Losh wrote:
>
> >       i understood that this implemented the elevator algorithm, and
> >       possible rotational latency compensation.
> > 
> > 
> > I know what it does. I want to know why that specific name was selected.
>
> Err, because as I replied in a previous message (did you not see it?), it 
> was up to the programmer to implement an optimal strategy to access the 
> sectors, depending upon the device?  I'm not being snarky, but it seems 
> like an obvious choice (if not a hint) to me...
>
> Let's see, I need a strategy to optimise access, taking into account
> seek and rotational latency.  I know!  I'll call it XXstrategy()!

I too wondered about "strategy" in 1981 when I first worked on
unix disk drivers. My guess then was something similar.
Anything other than a FIFO strategy would improve throughput
or fairness or avoid starvation, but was much more complex
because you can't freely reorder reads & writes.  My guess is the
original author who named it strategy had this in mind.

But these are not exactly dependent on the vagaries of
individual disk drivers.  At some point I factored out code
so that each specific disk driver took about 1KB of code and
shared about 7KB of common code.

> For example, I could envisage a disk where the sectors are deliberately 
> not numbered sequentially i.e. they've taken rotational latency into 
> account for you?

We did in fact use an interleave factor of more than 1 (skip
more than 1 block between consecutively numbered sectors) to
improve throughput, but that had to do with slow processing.
We did discuss "dead reckoning" (invoking the service routine
right when the sector numbered N+1 was near the r/w heads) but
I don't think we implemented it.


* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-09  5:19       ` Bakul Shah
@ 2019-01-09  5:45         ` Warner Losh
  2019-01-09 15:09           ` Bakul Shah
  0 siblings, 1 reply; 9+ messages in thread
From: Warner Losh @ 2019-01-09  5:45 UTC (permalink / raw)
  To: Bakul Shah; +Cc: The Eunuchs Hysterical Society


On Tue, Jan 8, 2019 at 10:20 PM Bakul Shah <bakul@bitblocks.com> wrote:

> On Wed, 09 Jan 2019 13:10:18 +1100 Dave Horsfall <dave@horsfall.org>
> wrote:
> Dave Horsfall writes:
> > On Tue, 8 Jan 2019, Warner Losh wrote:
> >
> > >       i understood that this implemented the elevator algorithm, and
> > >       possible rotational latency compensation.
> > >
> > >
> > > I know what it does. I want to know why that specific name was
> selected.
> >
> > Err, because as I replied in a previous message (did you not see it?),
> it
> > was up to the programmer to implement an optimal strategy to access the
> > sectors, depending upon the device?  I'm not being snarky, but it seems
> > like an obvious choice (if not a hint) to me...
> >
> > Let's see, I need a strategy to optimise access, taking into account
> > seek and rotational latency.  I know!  I'll call it XXstrategy()!
>
> I too wondered about "strategy" in 1981 when I first worked on
> unix disk drivers. My guess then was something similar.
> Anything other than a FIFO strategy would improve throughput
> or fairness or avoid starvation but was much more complex due
> to the fact you can't reorder reads & writes.  My guess is the
> original author who named it strategy had this in mind.
>

Speaking of origins, it's in the V4 nsys (pre V5) sources we have. It's not
in the V1 or V0 sources we have. I could find no comments in the V4 stuff
or the V5 stuff to give its origin tale.


> But these are not exactly dependent on the vagaries of
> individual disk drivers.  At some point I factored out code
> so that each specific disk driver took about 1KB of code and
> shared about 7KB of common code.
>

Yes. Many OSes have factored this out so the code can be shared among the
(now wide) variety of devices out there, as well as tuned for different
workloads.


> > For example, I could envisage a disk where the sectors are deliberately
> > not numbered sequentially i.e. they've taken rotational latency into
> > account for you?
>
> We did in fact use an interleave factor of more than 1 (skip
> more than 1 block for consecutively numbered sectors) to
> improve throughput but that had to do with slow processing.
> We did discuss "dead reckoning" (invoking the service routine
> right when the N+1 numbered sector was near the r/w heads) but
> I don't think we implemented it.
>

For the floppy drivers I've seen the source to in early Unixes, this was
often the case. One minor device would access the 'raw' device, while
another would access the 'cooked' sector numbers, where the mapping was
anything but linear. You'd have an interleave of, say, 4 or so, and
then a 'slip' from track to track. The interleave factor was based on how
much time it took for (a) the disk to rotate and (b) the OS to process
the sector (plus a little), and the 'slip' from track to track was based
on rotational time and seek time, so that in a sequential workload the
first logical sector of the next track would quickly be under the read
head... All of this was done in the floppy driver.

But there were some drawbacks to this that became clear as a number of
different technologies to access the drive were developed. When you have 6
different floppy controllers, it makes sense to abstract out the bits that
are common to all floppies. When you have 200 disk controllers in 20
different drivers, it makes sense to add a disk layer in there. Add to that
partitioning schemes that are nested, and you quickly realize you don't
want to implement all that in every driver... One of the brilliant
decisions in Unix was to leave all this to the driver and not have the OS
get in the way... But it is also one that no modern Unix or Unix-like
system retains, due to the extreme diversity of block devices in the
marketplace today. There are thousands, where the original Unix had maybe
a half dozen different ones if you squinted just right... At least
disksort() was abstracted out into a library...

Warner



* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-09  5:45         ` Warner Losh
@ 2019-01-09 15:09           ` Bakul Shah
  0 siblings, 0 replies; 9+ messages in thread
From: Bakul Shah @ 2019-01-09 15:09 UTC (permalink / raw)
  To: Warner Losh; +Cc: The Eunuchs Hysterical Society

On Tue, 08 Jan 2019 22:45:31 -0700 Warner Losh <imp@bsdimp.com> wrote:
>
>
> > > For example, I could envisage a disk where the sectors are deliberately
> > > not numbered sequentially i.e. they've taken rotational latency into
> > > account for you?
> >
> > We did in fact use an interleave factor of more than 1 (skip
> > more than 1 block for consecutively numbered sectors) to
> > improve throughput but that had to do with slow processing.
> > We did discuss "dead reckoning" (invoking the service routine
> > right when the N+1 numbered sector was near the r/w heads) but
> > I don't think we implemented it.
> >
>
>  For floppy drivers that I've seen the source to in early unixes, this was
> often the case. One minor device would be to access the 'raw' device, while
> another would be to access the 'cooked' sector numbers where the mapping
> was anything but linear. you'd have an interleave of, say, 4 or so, and
> then a 'slip' from track to track. The interleave factor was based on how

We used interleaving on the hard disk because a 5 Mbps ST412
drive could stream data faster than a typical user program could
handle (on a machine with a 5.6 MHz bus). We used h/w support, as
the machine was already too slow to do any s/w interleaving!

Example: for an interleave of 1, when formatting the disk,
sector ids would be written in this sequence:
 1 8 2 9 3 A 4 B 5 C 6 D 7 E
We picked the interleave number based on some typical use
cases at the time.

The floppy driver was a completely separate driver for
various reasons.


* Re: [TUHS] TUHS Digest, Vol 38, Issue 10
  2019-01-09  4:23       ` Warner Losh
@ 2019-01-09 19:13         ` Christian Barthel
  0 siblings, 0 replies; 9+ messages in thread
From: Christian Barthel @ 2019-01-09 19:13 UTC (permalink / raw)
  To: Warner Losh; +Cc: The Eunuchs Hysterical Society

Warner Losh <imp@bsdimp.com> writes:

> The name seems obvious because I've seen it for the last 30 years. But I've
> not seen it used elsewhere, nor have I seen it documented except in
> relationship to Unix. [....]

It is also used in Erich Gamma et al.'s famous book "Design Patterns": the
Strategy pattern.  Is it somehow related to Unix (given that the Unix
device interface is designed in an object-oriented fashion)?

Apart from that, U. Vahalia describes the strategy entry point in the
book "UNIX Internals: The New Frontiers" (ISBN 978-0131019089) as:

| d_strategy() - Common point for read and write requests to a block
| device.  So named since the driver may use some strategy to reorder
| pending requests to optimize performance.  Operates asynchronously - if
| the device is busy, the routine merely queues the request and returns.
| When the I/O completes, the interrupt handler will dequeue the next
| request and start the next I/O. 

-- 
Christian Barthel

