On Tue, Jan 8, 2019 at 10:20 PM Bakul Shah <bakul@bitblocks.com> wrote:
On Wed, 09 Jan 2019 13:10:18 +1100 Dave Horsfall <dave@horsfall.org> wrote:
Dave Horsfall writes:
> On Tue, 8 Jan 2019, Warner Losh wrote:
>
> >       i understood that this implemented the elevator algorithm, and
> >       possible rotational latency compensation.
> >
> >
> > I know what it does. I want to know why that specific name was selected.
>
> Err, because as I replied in a previous message (did you not see it?), it
> was up to the programmer to implement an optimal strategy to access the
> sectors, depending upon the device?  I'm not being snarky, but it seems
> like an obvious choice (if not a hint) to me...
>
> Let's see, I need a strategy to optimise access, taking into account
> seek and rotational latency.  I know!  I'll call it XXstrategy()!

I too wondered about "strategy" in 1981 when I first worked on
Unix disk drivers. My guess then was something similar.
Anything other than a FIFO strategy would improve throughput
or fairness, or avoid starvation, but was much more complex
because you can't freely reorder reads & writes.  My guess is
the original author who named it strategy had this in mind.

Speaking of origins, it's in the V4 nsys (pre-V5) sources we have, but not in the V0 or V1 sources. I could find no comments in the V4 or V5 code to give its origin tale.
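
To make the non-FIFO point concrete, here's a rough sketch of a one-way elevator insertion, loosely in the spirit of the later disksort(). The names and the request structure are invented for the example; this is not the historical code, just what "keep the queue in the order the head will pass over the blocks" looks like:

/*
 * Minimal, illustrative sketch of a one-way elevator ("strategy")
 * queue.  Names and the request struct are made up; this is not
 * the historical code.
 */
#include <stdio.h>
#include <stdlib.h>

struct ioreq {
    long blkno;            /* starting block of the request */
    struct ioreq *next;
};

/*
 * Insert rq so that blocks at or beyond the current head position
 * are served first in ascending order, followed by the requests
 * that have to wait for the next sweep.
 */
static void
elevator_insert(struct ioreq **head, struct ioreq *rq, long headpos)
{
    struct ioreq **pp = head;

    while (*pp != NULL) {
        struct ioreq *cur = *pp;
        int cur_ahead = cur->blkno >= headpos;
        int rq_ahead = rq->blkno >= headpos;

        if (rq_ahead && !cur_ahead)
            break;          /* rq belongs in this sweep, cur in the next */
        if (rq_ahead == cur_ahead && rq->blkno < cur->blkno)
            break;          /* same sweep, keep ascending block order */
        pp = &cur->next;
    }
    rq->next = *pp;
    *pp = rq;
}

int
main(void)
{
    long blocks[] = { 90, 10, 70, 30, 120 };
    long headpos = 50;      /* pretend the head sits at block 50 */
    struct ioreq *queue = NULL;

    for (size_t i = 0; i < sizeof(blocks) / sizeof(blocks[0]); i++) {
        struct ioreq *rq = malloc(sizeof(*rq));
        rq->blkno = blocks[i];
        elevator_insert(&queue, rq, headpos);
    }

    /* service order comes out as: 70 90 120 10 30 */
    for (struct ioreq *rq = queue; rq != NULL; rq = rq->next)
        printf("%ld ", rq->blkno);
    printf("\n");
    return 0;
}

The only point is that the queue ends up in head-sweep order rather than arrival order; the real strategy routines also had to worry about write ordering, errors, and so on.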
 
But these are not exactly dependent on the vagaries of
individual disk drivers.  At some point I factored out code
so that each specific disk driver took about 1KB of code and
shared about 7KB of common code.

Yes. Many OSes have factored this out so the code can be shared among the now wide variety of devices out there, as well as tuned for different workloads.
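
As a rough sketch of that kind of split (all names invented, not modeled on any particular kernel), the hardware-specific part shrinks to a small ops structure and a start routine, while the queueing, sorting, retries and accounting live in shared code:

/* Rough sketch (invented names) of a per-driver piece behind a shared disk layer. */
#include <stdio.h>

struct dkreq {
    long blkno;                 /* block to transfer */
    int  write;                 /* 0 = read, 1 = write */
};

/* The small hardware-specific part each driver supplies. */
struct dkdriver {
    const char *name;
    int (*start)(struct dkreq *);   /* kick off one transfer */
};

/* The shared part: in a real layer this is where the queue would be
 * sorted, partitions translated, errors retried, statistics kept... */
static int
disk_strategy(struct dkdriver *dk, struct dkreq *rq)
{
    printf("%s: %s block %ld\n", dk->name,
           rq->write ? "write" : "read", rq->blkno);
    return dk->start(rq);
}

/* One tiny "driver": only the hardware poke lives here. */
static int
fakedisk_start(struct dkreq *rq)
{
    (void)rq;                   /* would program the controller here */
    return 0;
}

int
main(void)
{
    struct dkdriver fakedisk = { "fd0", fakedisk_start };
    struct dkreq rq = { 42, 0 };

    return disk_strategy(&fakedisk, &rq);
}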
 
> For example, I could envisage a disk where the sectors are deliberately
> not numbered sequentially i.e. they've taken rotational latency into
> account for you?

We did in fact use an interleave factor of more than 1
(consecutively numbered sectors placed more than one physical
sector apart) to improve throughput, but that had to do with
slow processing.  We did discuss "dead reckoning" (invoking
the service routine right when the N+1 numbered sector was
near the r/w heads) but I don't think we implemented it.

For the floppy drivers I've seen the source to in early Unixes, this was often the case. One minor device would give access to the 'raw' device, while another would give access to 'cooked' sector numbers whose mapping was anything but linear. You'd have an interleave of, say, 4 or so, and then a 'slip' from track to track. The interleave factor was based on how much time it took for (a) the disk to rotate and (b) the OS to process the sector (plus a little), and the slip from track to track was based on rotational and seek time, so that on a sequential workload the first logical sector of the next track would quickly be under the read head... All of this was done in the floppy driver.
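
A toy sketch of that cooked mapping, with made-up numbers (9 sectors per track, interleave 4, a slip of 2 sectors per track), just to show the flavor of what such a driver computed:

/*
 * Illustrative only: logical-to-physical sector mapping with an
 * interleave within a track plus a per-track "slip" (skew).
 * Parameters and names are invented for the example.
 */
#include <stdio.h>

#define SPT        9   /* sectors per track */
#define INTERLEAVE 4   /* slots stepped between consecutively numbered sectors */
#define TRACKSKEW  2   /* extra rotation per track to cover the seek */

/*
 * Build the interleave table the way a formatter would: walk around
 * the track INTERLEAVE slots at a time, bumping to the next free
 * slot on a collision.
 */
static void
build_map(int map[SPT])
{
    int slot = 0;

    for (int s = 0; s < SPT; s++)
        map[s] = -1;
    for (int logical = 0; logical < SPT; logical++) {
        while (map[slot] != -1)
            slot = (slot + 1) % SPT;     /* slot taken, slip forward */
        map[slot] = logical;
        slot = (slot + INTERLEAVE) % SPT;
    }
}

/* Translate a "cooked" block number into track + physical sector. */
static int
cooked_to_physical(int blkno, const int map[SPT], int *track)
{
    int logical = blkno % SPT;
    int s;

    *track = blkno / SPT;
    for (s = 0; s < SPT; s++)
        if (map[s] == logical)
            break;              /* physical slot holding this logical sector */
    /* rotate the whole track by the per-track skew */
    return (s + *track * TRACKSKEW) % SPT;
}

int
main(void)
{
    int map[SPT], track;

    build_map(map);
    for (int blk = 0; blk < 2 * SPT; blk++) {
        int sector = cooked_to_physical(blk, map, &track);
        printf("block %2d -> track %d, sector %d\n", blk, track, sector);
    }
    return 0;
}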

But there were some drawbacks to this that became clear as a number of different technologies to access the drives were developed. When you have 6 different floppy controllers, it makes sense to abstract out the bits that are common to all floppies. When you have 200 disk controllers in 20 different drivers, it makes sense to add a disk layer in there. Add to that partitioning schemes that are nested, and you quickly realize you don't want to implement all of that in every driver... One of the brilliant decisions in Unix was to leave all this to the driver and not have the OS get in the way... But it's also a decision that no modern Unix or Unix-like system retains, due to the extreme diversity of block devices in the marketplace today. There are thousands of them, where the original Unix had maybe a half dozen different ones if you squinted just right... At least disksort() was abstracted out into a library...
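
For the nested-partitioning point, a toy sketch (layout and names invented) of why a shared layer helps: each level of nesting is just another offset to add, and you really only want that arithmetic written once, not repeated in every driver:

/* Toy sketch of nested partitions: an outer slice containing an inner label. */
#include <stdio.h>

struct part {
    long start;                 /* first block, relative to the parent */
    long size;                  /* length in blocks */
    const struct part *parent;  /* enclosing partition, or NULL */
};

/* Translate a block number inside a (possibly nested) partition
 * into an absolute block number on the underlying disk. */
static long
part_to_disk(const struct part *p, long blkno)
{
    for (; p != NULL; p = p->parent)
        blkno += p->start;
    return blkno;
}

int
main(void)
{
    /* e.g. an MBR-style slice, with a BSD-style label inside it */
    struct part slice = { 2048, 1000000, NULL };
    struct part a     = { 0,      200000, &slice };
    struct part b     = { 200000, 800000, &slice };

    printf("a+10 -> %ld\n", part_to_disk(&a, 10));   /* 2058 */
    printf("b+10 -> %ld\n", part_to_disk(&b, 10));   /* 202058 */
    return 0;
}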

Warner