The Unix Heritage Society mailing list
From: Dan Cross <>
To: Paul Ruizendaal <>
Cc: The Unix Heritage Society <>
Subject: [TUHS] Re: Porting the SysIII kernel: boot, config & device drivers
Date: Sat, 31 Dec 2022 16:41:38 -0500	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On Sat, Dec 31, 2022 at 3:02 PM Paul Ruizendaal <> wrote:
> > On 31 Dec 2022, at 15:59, Dan Cross <> wrote:
> > I think that BSD supported autoconfiguration on the VAX well before
> > OpenBoot; the OpenBSD man page says it dates from 4.1 (1981) and was
> > revamped in 4.4.
> That is interesting. Are you referring to this:
> Or is auto configuration something else?

Warner answered this better than I could. :-)

> > SBI was/is an attempt to define a common interface to different
> > devices using RISC-V CPUs, but it's growing wildly and tending more
> > towards UEFI+ACPI than something like OpenBoot, which was much
> > simpler.
> >
> > In general, the idea of a BIOS isn't terrible: provide an interface
> > that decouples the OS and hardware. But in practice, no one has come
> > up with a particularly good set of interfaces yet. Ironically,
> > BSD-style autoconfig might have been the best yet.
> When it comes to abstracting devices, it would seem to me that virtio has gotten quite some traction in the last decade.

Virtio is solving a very, very different problem: this is limited
paravirtualization of devices for virtual machines. The idea of virtio
devices is predicated on a hypervisor somewhere providing a matching
implementation that, in turn, plugs into whatever primitives the host
provides for device management. Put another way, devices don't expose
virtio interfaces directly (generally speaking), and you need
something somewhere to translate from the virtio interfaces a guest
sees to actual useful services that the guest makes use of via those
interfaces.

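To make the "matching implementation" point concrete, here is roughly what the guest-visible side of a virtio queue looks like: a ring of descriptors in guest memory, each naming a guest-physical buffer that the hypervisor's backend walks and services. This is a sketch of the split-virtqueue descriptor from the virtio spec, not a complete driver:

```c
#include <assert.h>
#include <stdint.h>

/* Split-virtqueue descriptor, per the virtio spec: the guest fills in
   a guest-physical address and length; the host-side backend walks the
   descriptor ring and performs the actual I/O against real devices. */
struct virtq_desc {
    uint64_t addr;   /* guest-physical buffer address */
    uint32_t len;    /* buffer length in bytes */
    uint16_t flags;  /* chaining / direction flags */
    uint16_t next;   /* index of next descriptor in a chain */
};

#define VIRTQ_DESC_F_NEXT  1  /* buffer continues in desc[next] */
#define VIRTQ_DESC_F_WRITE 2  /* device writes into this buffer */
```

Note that nothing in this structure does anything by itself — it only has meaning because something on the other side agrees to interpret it, which is exactly the dependency on a hypervisor described above.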
A host _could_ emulate something like an actual physical hardware
device (for some category of device) but it turns out that's often not
particularly efficient; for instance virtio-net tends to be somewhat
faster than an emulated e1000. Still, virtio has some problems; you
can program DMA to anywhere in the guest physical address space, which
can be expensive to handle without an exit from the guest context;
that overhead is part of why virtio-net isn't as fast as Google's
gvnic.

> Looking at both the systems of 40 years ago and the RISC-V SoCs of last year, I could imagine something like:
> - A simple SBI, like the Berkeley Boot Loader (BBL) without the proxy kernel & host interface

This, I think, is really the hard part. The issue is that systems are
configured a myriad of different ways depending on the platform,
vendor, particular combination of IO devices, etc. That makes figuring
out where the host OS image is and getting it loaded into memory a
real pain in the booty, and that's before modern systems where you've
got to do memory training as part of initializing DRAM controllers and
the like, or power sequencing, or initializing IO buses and
all that business: who assigns PCI BARs? What if the kernel is on a
PCI device, like an NVMe SSD or similar? Turning on a modern CPU is
really a non-trivial exercise, let alone getting the OS loaded. Once
you get it into memory, though, _starting_ the kernel is comparatively
easy. But all the gritty details before that are how we ended up with
enormous systems like UEFI.

> - Devices presented to the OS as virtio devices

But backed by what?

> - MMU abstraction somewhat similar in idea to that in SYSV/68 [1]

It's not clear to me what this would buy you; the MMU tends to be
architecturally defined, in a way that e.g. the boot process, IO bus
initialization, and even CPU topology discovery are not. And then
there are soft-TLBs (which are a bad idea, but that's a separate
discussion). I mean, most VM systems since SunOS 4 have a HAL and a
portable part already.

> Together this might be a usable Unix BIOS that could have worked in the early 80’s. One could also think of it as a simple hypervisor for only one client.

I believe that AIX on the PC/RT did basically this. Certainly, this
was true on the 370. The PALcode stuff on the Alpha was similar in
terms of having a BIOS-like abstraction with specific implementations
for Windows or VMS/Unix.

But there's an argument that this is functionality that _belongs_ in
the OS, which was the traditional Unix approach. Could it have been
done differently? I guess so, but would that have been a good system
design? I don't know.

> The remaining BBL functionality is not all that different from the content in mch.s on 16-bit Unix (e.g. floating point emulation for CPU’s that don’t have it in hardware). A virtio device is not all that different from the interface presented by e.g. PDP-11 ethernet devices (e.g. DELUA), the MMU abstraction was contemporary.
> High end systems could have had device hardware that implemented the virtio interface directly,

This is challenging with virtio because some devices basically rely on
the guest trapping into the host with a particular exit code to kick
off IO. I suppose you could do that with a shared semaphore, but then
you're burning cycles somewhere polling. You could interrupt yourself,
either synchronously or with a self-IPI, but then the interrupt
handler has to run the virtio handling code. In any event, it's no
longer a simple matter of writing to a device register somewhere to
make the "device" actually do something. Sadly, that's pretty baked
into the interface.
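To make the "kick" concrete: under virtio-mmio the guest notifies the device by storing the queue index to the QueueNotify register (offset 0x050 in the spec's register layout), and it is that store itself which traps to the host. A rough guest-side sketch, with a plain array standing in for the MMIO window so it runs anywhere:

```c
#include <assert.h>
#include <stdint.h>

/* virtio-mmio QueueNotify register offset, from the virtio spec. */
#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050

/* Stand-in for the device's MMIO register window; on real hardware
   this would be a mapped physical address, and under a hypervisor
   the store below is what causes the VM exit. */
static uint32_t fake_device_regs[0x100];

/* Tell the "device" that buffers are pending on the given queue. */
static void virtqueue_kick(volatile uint32_t *base, uint32_t queue_index)
{
    base[VIRTIO_MMIO_QUEUE_NOTIFY / 4] = queue_index;
}
```

The point is that there's no side effect in the store itself for real silicon to latch onto cheaply; the semantics live in the trap, which is what makes a native-hardware virtio device awkward.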

> low end systems could use simpler hardware and use the BIOS to present this interface via software.

But you've got to have some CPU run that software somewhere; either
you've got a satellite processor doing it or you're stealing cycles
from the OS. The latter is how a BIOS traditionally works and I
suspect that if we were doing this in the 1980s, having a dedicated IO
processor would have added non-trivial cost to a system, for little
gain: let's remember that Unix was _very_ successful on a variety of
systems in that era without relying on a BIOS-type abstraction!

> [1] From mmu.h in the SYSV/68 source tree:
> /*
>         @(#)mmu.h       2.26
>         This is the header file for the UNIX V/68 generic
>         MMU interface. This provides the information that
>         is used by the various routines that call:
>         mmufork ()
>         mmuexec ()
>         mmuexit ()
>         mmuread ()
>         mmuwrite ()
>         mmuattach ()
>         mmudetach ()
>         mmuregs ()
>         mmualloc ()
>         mmuinit ()
>         mmuint ()
>         The above routines and secondary routines called
>         by them are contained in io/mmu1.c and io/mmu2.c.
> */
