All,

I was introduced to Unix in the mid-1990s through my wife's VMS account at UT Arlington, where they had a portal to the WWW. I was able to download Slackware with the 0.9 kernel on 11 floppies, including X11. I installed this on my system at the time - either a DEC Rainbow 100B? or a hand-me-down generic PC. A few years later, at Western Illinois University, they had some Sun workstations and I loved working with them. It would be several years later, though, that I would actually use Unix in a work setting - 1998. I don't even remember what brand of Unix, but I think it was again Sun, though no GUI, so not as much love. Still, I was able to use RCS, and when my Windows-bound buddies lost a week's work because of some snafu with their backups, I didn't lose anything - jackflash was the name of the server - good memories :).

However, after this it was all DOS and Windows until 2005. I'd been eyeing Macs for some time. I liked the visual aesthetics and obvious design considerations. But in 2005, I finally had a bonus big enough to actually buy one. I bought a G5 24" iMac and fell in love with Mac. Next, it was a 15" G4 PowerBook. I loved those Macs until Intel came around, and then it was game over, no more PCs in my life (not really, but emotionally, this was how I felt). With Mac going Intel, I could dual boot into Windows, triple boot into Linux, and quadruple boot into FreeBSD, and I could ditch Fink and finally manage my Unix tools properly (arguable, I know) with Homebrew or MacPorts (lately, I've gone back to MacPorts due to Homebrew's lack of support for older OS versions, and for MacPorts' seeming rationality).
Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the ride got really bumpy (too much phone home, no more 32-bit programs, and since Adobe Acrobat X, which I own outright, isn't 64-bit, among other apps, this just is not an option for me), and with Big Sur, it's gotten worse - potholes, sinkholes, and suchlike - and the interface is downright patronizing (remember Microsoft Bob?). So, here I am, Mr. Run-Any-Cutting-Edge-OS anytime guy, hanging on tooth and nail to Mac OS Mojave, where I still have a modicum of control over my environment.

My thought for the day and question for the group is... It seems that the options for a free operating system (free as in freedom) are becoming ever more limited - Microsoft, this week, announced that their Edge update will remove Edge Legacy and IE while doing the update - nuts; Mac's desktop is turning into IOS - ew, ick; and Linux is wild west meets dictatorship and major corporations are moving in to set their direction (Microsoft, Oracle, IBM, etc.). FreeBSD we've beat to death over the last couple of weeks, so I'll leave it out of the mix for now. What in our unix past speaks to the current circumstance and what do those of you who lived those events see as possibilities for the next revolution - and, will unix be part of it?

And a bonus question, why, oh why, can't we have a contained kernel that provides minimal functionality (dare I say microkernel), that is securable, and layers above it that other stuff (everything else) can run on with auditing and suchlike for traceability?
On Mon, Feb 08, 2021 at 12:11:08PM -0600, Will Senn wrote:
> And a bonus question, why, oh why, can't we have a contained kernel that
> provides minimal functionality (dare I say microkernel), that is securable,
> and layers above it that other stuff (everything else) can run on with
> auditing and suchlike for traceability?
I can answer the microkernel question I think. It's discipline.
The only microkernel I ever liked was QNX and I liked it because it was
a MICROkernel. The entire kernel easily fit in a 4K instruction cache.
The only way that worked was discipline. There were 3 guys who could
touch the kernel, one of them, Dan Hildebrandt, was sort of a friend
of mine, we could, and did, have conversations about the benefits of a
monokernel vs a microkernel. He agreed with me that QNX only worked
because those 3 guys were really careful about what went into the
kernel. There was none of this "Oh, I measured performance and it is
only 1.5% slower now" nonsense, that's death by a hundred paper cuts.
Instead, every change came with before and after cache miss counts
under a benchmark. Stuff that increased the cache misses was heavily
frowned upon.
Most teams don't have that sort of discipline. They say they do,
they think they do, but when marketing says we have to do $WHATEVER,
it goes in.
On Mon, Feb 8, 2021 at 10:22 AM Larry McVoy <lm@mcvoy.com> wrote:
> On Mon, Feb 08, 2021 at 12:11:08PM -0600, Will Senn wrote:
> > And a bonus question, why, oh why, can't we have a contained kernel that
> > provides minimal functionality (dare I say microkernel), that is securable,
> > and layers above it that other stuff (everything else) can run on with
> > auditing and suchlike for traceability?
>
> I can answer the microkernel question I think. It's discipline.
> The only microkernel I ever liked was QNX and I liked it because it was
> a MICROkernel. The entire kernel easily fit in a 4K instruction cache.
>
> The only way that worked was discipline. There were 3 guys who could
> touch the kernel, one of them, Dan Hildebrandt, was sort of a friend
> of mine, we could, and did, have conversations about the benefits of a
> monokernel vs a microkernel. He agreed with me that QNX only worked
> because those 3 guys were really careful about what went into the
> kernel. There was none of this "Oh, I measured performance and it is
> only 1.5% slower now" nonsense, that's death by a hundred paper cuts.
> Instead, every change came with before and after cache miss counts
> under a benchmark. Stuff that increased the cache misses was heavily
> frowned upon.
>
> Most teams don't have that sort of discipline. They say they do,
> they think they do, but when marketing says we have to do $WHATEVER,
> it goes in.

This describes pretty much every project I've ever worked on. It starts small, with a manageable feature set and a clean and performant codebase, and then succumbs to external pressure for features and slowly bloats. If the features prove useful then the project will live on of course (and those features may well be the reason the project lives on), but at some point the bloat and tech debt become the dominant development story.
My question then is, are there any examples of projects that maintained discipline, focus, and relevance over years/decades that serve as counterexamples to the above statement(s)? OpenBSD? Go? Is there anything to learn here?

-Justin
--
+1 (858) 230-1436
jqcoffey@gmail.com
-----
On Mon, Feb 08, 2021 at 10:32:03AM -0800, Justin Coffey wrote:
> My question then is, are there any examples of projects that maintained
> discipline, focus and relevance over years/decades that serve as counter
> examples to the above statement(s)? OpenBSD? Go? Is there anything to
> learn here?
I also think it is team size. We never had more than 8 engineers on
BitKeeper, and the core was really 4, and we kept adding features and the
main code topped out at 128K lines. It got to 120K or so pretty quickly
and then we just kept pushing for changesets that removed as much code
as they added, bonus points for when they removed more than they added.
I think if we had more people it would have been harder to keep things
small and clean.
On Mon, 8 Feb 2021 at 13:12, Will Senn <will.senn@gmail.com> wrote:
> Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the
> ride got really bumpy (too much phone home, no more 32 bit programs and
> since Adobe Acrobat X, which I own, outright, isn't 64 bit, among other
> apps, this just is not an option for me), and with Big Sur, it's gotten
> worse, potholes, sinkholes, and suchlike, and the interface is downright
> patronizing (remember Microsoft Bob?). So, here I am, Mr.
> Run-Any-Cutting-Edge-OS anytime guy, hanging on tooth and nail to Mac OS
> Mojave where I still have a modicum of control over my environment.

I hear you on this one. I'm sticking with Mojave as well on my Mac laptop, but part of that is also because I refuse to give up on what is now an almost eight-year-old machine that has no real problems and has all of the hardware and ports I want. Apple loves to move quickly and abandon compatibility, and in that respect it's an interesting counterpoint to Linux or a *BSD, where you can have decades-old binaries that still run.

> And a bonus question, why, oh why, can't we have a contained kernel that
> provides minimal functionality (dare I say microkernel), that is securable,
> and layers above it that other stuff (everything else) can run on with
> auditing and suchlike for traceability?

Oh no, not this can of worms... I bet Clem has quite a bit to say about this, but I'll boil it down to this: Mach bombed spectacularly (check out the Wikipedia article, it's pretty decent) and set the idea in people's heads that microkernels were not the way to go. If you wanted to write a microkernel OS today, IMHO you'd need to be fully UNIX compatible, and you'd need to natively write EVERY syscall so that performance isn't horrible. This has turned out to be much harder than one might think at first glance. Just ask the GNU Hurd folks...
All said, this is probably a space where the time and effort required to squeeze the last 10%, or 5%, or 1% of performance out of the hardware just isn't worth the investment.

-Henry
On Mon, Feb 8, 2021 at 10:12 AM Will Senn <will.senn@gmail.com> wrote:
> All,
>
> My thought for the day and question for the group is... It seems that the
> options for a free operating system (free as in freedom) are becoming ever
> more limited - Microsoft, this week, announced that their Edge update will
> remove Edge Legacy and IE while doing the update - nuts; Mac's desktop is
> turning into IOS - ew, ick; and Linux is wild west meets dictatorship and
> major corporations are moving in to set their direction (Microsoft, Oracle,
> IBM, etc.). FreeBSD we've beat to death over the last couple of weeks, so
> I'll leave it out of the mix for now. What in our unix past speaks to the
> current circumstance and what do those of you who lived those events see as
> possibilities for the next revolution - and, will unix be part of it?
>
> And a bonus question, why, oh why, can't we have a contained kernel that
> provides minimal functionality (dare I say microkernel), that is securable,
> and layers above it that other stuff (everything else) can run on with
> auditing and suchlike for traceability?

I love Linux, especially Debian lately. But I also have high hopes for Redox OS, and may switch someday:

https://en.wikipedia.org/wiki/Redox_(operating_system)
https://www.theregister.com/2019/11/29/after_four_years_rusty_os_nearly_selfhosting/
On 2/8/21 10:11 AM, Will Senn wrote:
> Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the ride got really bumpy
The problem now is that they have broken enough of the APIs that developers aren't willing to support old but
useful releases. Even MAME has recently been forced to abandon support for perfectly functioning machines running
versions before 10.15, and I have no desire to purchase any Apple product capable of running that release.
On Mon, Feb 08, 2021 at 10:32:03AM -0800, Justin Coffey wrote:
> This describes pretty much every project I've ever worked on. It starts
> small, with a manageable feature set and a clean and performant codebase
> and then succumbs to external pressure for features and slowly bloats. If
> the features prove useful then the project will live on of course (and
> those features may well be the reason the project lives on), but at some
> point the bloat and techdebt become the dominant development story.
The problem is users all want a different set of features. One
person's "small and clean" is another person's "missing critical
feature".
This is one of the problems which the Linux enterprise distros see.
Everyone wants everything to stay the same --- except for their own
pet feature. Or because they want their new hardware (say, NVMe,
which wasn't present in the version of the Linux kernel that was
frozen for the enterprise distro three years ago).
Ultimately, the reason why you can't have what you want boils down to
sheer economics. If you want to pay a small team to give you exactly
what you want, but nothing else, then sure, you can have what you
want. If you and two dozen want _exactly_ the same thing, then it
will be a lot cheaper. But if you want something which is free as in
beer, or even the cost of iOS, then it needs to have all of the
features and hardware support for a much larger set of customers.
It's interesting that some folks are complaining about "elitism"; but
they don't seem to recognize that asking for something super small and
clean is itself inherently elitist. I also suspect they don't
*actually* want something _that_ small, in terms of feature set. Do
they *really* want something which is just V7 Unix, with nothing else?
No TCP/IP, no hot-plug USB support? No web browsing?
If so, it shouldn't be that hard to squeeze a PDP-11 into a laptop
form factor, and they can just run V7 Unix. Easy-peasy!
Oh, you wanted more than that? Feature bloat! Feature bloat!
Feature bloat! Shame! Shame! Shame!
- Ted
> Do they *really* want something which is just V7 Unix, with nothing else?
> No TCP/IP, no hot-plug USB support? No web browsing?

> Oh, you wanted more than that? Feature bloat! Feature bloat!
> Feature bloat! Shame! Shame! Shame!

% ls /usr/share/man/man2|wc
     495     495    7230
% ls /bin|wc
    2809    2809   30468

How many of roughly 500 system calls (to say nothing of uncounted ioctl's) do you think are necessary for writing those few crucial capabilities that distinguish Linux from v7? There is undeniably bloat, but only a sliver of it contributes to the distinctive utility of today's systems.

Or consider this. Unix grew by about 39 system calls in its first decade, but an average of 40 per decade ever since. Is this accelerated growth more symptomatic of maturity or of cancer?

Doug
Once you add the "s" editor (so you have a screen editor) and UUCP, v7 is an adequate daily driver, in my recent experience. No, it didn't actually replace my Mac, but with a way to get data on and off it and a decent editor (which I personally do not feel "ed" is)... it's totally OK. I can edit files and move them around, which honestly is most of my job.

Adam

On Mon, Feb 8, 2021 at 8:59 PM M Douglas McIlroy <m.douglas.mcilroy@dartmouth.edu> wrote:
> > Do they *really* want something which is just V7 Unix, with nothing else?
> > No TCP/IP, no hot-plug USB support? No web browsing?
>
> > Oh, you wanted more than that? Feature bloat! Feature bloat!
> > Feature bloat! Shame! Shame! Shame!
>
> % ls /usr/share/man/man2|wc
>      495     495    7230
> % ls /bin|wc
>     2809    2809   30468
>
> How many of roughly 500 system calls (to say nothing of uncounted
> ioctl's) do you think are necessary for writing those few crucial
> capabilities that distinguish Linux from v7? There is
> undeniably bloat, but only a sliver of it contributes to the
> distinctive utility of today's systems.
>
> Or consider this. Unix grew by about 39 system calls in its first
> decade, but an average of 40 per decade ever since. Is this
> accelerated growth more symptomatic of maturity or of cancer?
>
> Doug
On 2/8/21 9:58 PM, M Douglas McIlroy wrote:
>> Do they *really* want something which is just V7 Unix, with nothing else?
>> No TCP/IP, no hot-plug USB support? No web browsing?
>> Oh, you wanted more than that? Feature bloat! Feature bloat!
>> Feature bloat! Shame! Shame! Shame!
> % ls /usr/share/man/man2|wc
>      495     495    7230
> % ls /bin|wc
>     2809    2809   30468
>
> How many of roughly 500 system calls (to say nothing of uncounted
> ioctl's) do you think are necessary for writing those few crucial
> capabilities that distinguish Linux from v7? There is
> undeniably bloat, but only a sliver of it contributes to the
> distinctive utility of today's systems.
>
> Or consider this. Unix grew by about 39 system calls in its first
> decade, but an average of 40 per decade ever since. Is this
> accelerated growth more symptomatic of maturity or of cancer?
>
> Doug

I couldn't have said it better. I was looking at 'man syscalls' and shaking my head just a couple of days ago. I also just thought, hey, wouldn't it be nice to get a printed copy of the latest manpages - uh, that ain't gonna happen - 3000+ manpages, sheesh. Not sure I was thinking 'man widget_toolbar' would provide me with any deep insights anyway; maybe that's why it's in mann.

Will
On 2/8/21, Will Senn <will.senn@gmail.com> wrote:
>
> My thought for the day and question for the group is... It seems that
> the options for a free operating system (free as in freedom) are
> becoming ever more limited - Microsoft, this week, announced that their
> Edge update will remove Edge Legacy and IE while doing the update -
> nuts; Mac's desktop is turning into IOS - ew, ick; and Linux is wild
> west meets dictatorship and major corporations are moving in to set
> their direction (Microsoft, Oracle, IBM, etc.). FreeBSD we've beat to
> death over the last couple of weeks, so I'll leave it out of the mix for
> now. What in our unix past speaks to the current circumstance and what
> do those of you who lived those events see as possibilities for the next
> revolution - and, will unix be part of it?

Yes, those are almost exactly my thoughts on the current state of OSes. I'm hoping that I can change it by writing my own OS <https://gitlab.com/uxrt/>, because I don't see anyone else making anything that I consider to be a particularly good effort to solve those issues. However, it's still anyone's guess as to whether I will succeed. I'm trying to give it every chance of success that I can, though, by using existing code wherever possible and focusing on compatibility with Linux (or at least I will be once I get to that point).

BTW, I welcome any contributions to UX/RT. I currently am still only working on the allocation subsystem (it's a bit trickier under seL4 than it would be on bare metal, due to having to manage capabilities as well as memory), but I think I am pretty close to being able to move on.

> And a bonus question, why, oh why, can't we have a contained kernel that
> provides minimal functionality (dare I say microkernel), that is
> securable, and layers above it that other stuff (everything else) can
> run on with auditing and suchlike for traceability?
A lot of people still seem to believe that microkernels are inherently slow, even though fast microkernels (specifically QNX) predate the slow ones by several years. Also, it seems as if much of the academic microkernel community thinks that there's a need to abandon Unix-like architecture entirely and relegate Unix programs to some kind of "penalty box" compatibility layer, but I don't see any reason why that is the case. Certainly there are a lot of old Unix features that could be either reimplemented in terms of something more modern or just dropped entirely where doing so wouldn't break compatibility, but I still think it's possible to write a modern microkernel-native Unix-like OS that does most of what the various proposed or existing incompatible OSes do.

> I can answer the microkernel question I think. It's discipline.
> The only microkernel I ever liked was QNX and I liked it because it was
> a MICROkernel. The entire kernel easily fit in a 4K instruction cache.
>
> The only way that worked was discipline. There were 3 guys who could
> touch the kernel, one of them, Dan Hildebrandt, was sort of a friend
> of mine, we could, and did, have conversations about the benefits of a
> monokernel vs a microkernel. He agreed with me that QNX only worked
> because those 3 guys were really careful about what went into the
> kernel. There was none of this "Oh, I measured performance and it is
> only 1.5% slower now" nonsense, that's death by a hundred paper cuts.
> Instead, every change came with before and after cache miss counts
> under a benchmark. Stuff that increased the cache misses was heavily
> frowned upon.
>
> Most teams don't have that sort of discipline. They say they do,
> they think they do, but when marketing says we have to do $WHATEVER,
> it goes in.

I still don't get why QNX hasn't had more influence on anything else, even though it's been fairly successful commercially.
If I am successful, UX/RT will only be the second usable non-QNX OS with significant QNX influence that I am aware of (after VSTa; there have been a couple of other attempts at free QNX-like systems that I am aware of, but they haven't really produced anything that could be considered complete). seL4 is fairly similar to QNX's kernel, both in terms of architecture and design philosophy. That's why I'm using it in UX/RT. I may end up having to fork it at some point, but I am still going to keep to a strict policy of not adding something to the kernel unless there is no other way to reasonably implement it. For the sake of extensibility and security, I'm also going to have a strict policy against adding non-file-based primitives of any kind, which is something that QNX hasn't done (no other OS has AFAICT, not even Plan 9, since it has anonymous memory, whereas UX/RT will use memory-mapped files in a tmpfs instead).

On 2/8/21, Dan Stromberg <drsalists@gmail.com> wrote:
>
> But I also have high hopes for Redox OS, and may switch someday:
> https://en.wikipedia.org/wiki/Redox_(operating_system)
> https://www.theregister.com/2019/11/29/after_four_years_rusty_os_nearly_selfhosting/

The biggest problem I see with Redox is that they insist on writing everything from scratch, whereas with UX/RT I am going to use existing code wherever it is reasonable (this includes using the LKL project to get access to basically the full range of Linux device drivers, filesystems, and network protocols). Also, their VFS architecture is a bit questionable IMO. Otherwise I might have been inclined to just contribute to Redox (UX/RT will use quite a bit of Rust, but it won't try to be a "Rust OS" like Redox is). I may end up incorporating more code from Redox, though (I'm already going to use their heap allocator, but will enhance it with dynamic resizing of the heap).

In addition, they claim it to be a microkernel when it is actually a Plan 9-style hybrid, since the kernel includes a bunch of Unix system calls as primitives, disqualifying it from being a microkernel. This is unlike QNX, where the process server that implements the core of the Unix API is built into the kernel but accessed entirely through IPC and not through its own system calls (the UX/RT process server will be similar in scope to that of QNX and will similarly lack any kind of multi-personality infrastructure and be built alongside the kernel, but will be a separate binary).
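[A minimal illustration of the "memory-mapped files in a tmpfs instead of anonymous memory" idea above - this is not UX/RT code, just a sketch on stock Linux/POSIX, where /dev/shm is assumed to be a tmpfs:]

```python
import mmap
import os
import tempfile

# Where a program would normally create an anonymous mapping, map an
# immediately-unlinked file instead. On Linux, /dev/shm is assumed to be
# a tmpfs, so the pages live purely in memory; the fallback directory is
# only so the sketch runs elsewhere.
backing = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
fd, path = tempfile.mkstemp(dir=backing)
os.unlink(path)            # the name vanishes; the memory object lives on

os.ftruncate(fd, 4096)     # size the would-be "anonymous" region
mem = mmap.mmap(fd, 4096)  # shared, file-backed mapping instead of MAP_ANONYMOUS

mem[:5] = b"hello"         # use it exactly like anonymous memory
print(mem[:5])
```

The payoff is that the region has a file identity from the kernel's point of view, so the same access-control and pass-by-descriptor machinery that covers files covers it too.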
On 2/8/21, M Douglas McIlroy <m.douglas.mcilroy@dartmouth.edu> wrote:
>
> How many of roughly 500 system calls (to say nothing of uncounted
> ioctl's) do you think are necessary for writing those few crucial
> capabilities that distinguish Linux from v7? There is
> undeniably bloat, but only a sliver of it contributes to the
> distinctive utility of today's systems.
>
> Or consider this. Unix grew by about 39 system calls in its first
> decade, but an average of 40
> per decade ever since. Is this accelerated growth more symptomatic of
> maturity or of cancer?
>
> Doug
>
I'd say probably the latter. There's no good reason I can see for the
proliferation of system calls. It should be possible to write a system
where everything truly is a file, and reduce the actual primitives to
basically just (variants of) read() and write(), with absolutely
everything else (even other file APIs such as open()/close())
implemented on top of those. That's my plan for UX/RT. Extreme
minimalism of primitives should make things more manageable,
especially as far as access control and extensibility go.
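[A toy sketch of the scheme described above - all names here are invented for illustration, nothing is real UX/RT code: open() stops being a primitive and becomes a protocol spoken with read()/write()-style transfers to a filesystem server, which passes an open descriptor back over the same channel.]

```python
import os
import socket
import tempfile
import threading

def fs_server(conn):
    """Toy filesystem server: receive a path, reply with an open fd."""
    while True:
        path = conn.recv(4096).decode()      # the server's read() of the request
        if not path:
            return
        fd = os.open(path, os.O_RDONLY)      # the only place real opening happens
        socket.send_fds(conn, [b"ok"], [fd]) # descriptor rides along with the reply
        os.close(fd)                         # server's copy no longer needed

def my_open(conn, path):
    """Client-side open(): one write() of the request, one read() of the reply."""
    conn.sendall(path.encode())
    msg, fds, _flags, _addr = socket.recv_fds(conn, 16, 1)
    assert msg == b"ok"
    return fds[0]

client, server = socket.socketpair()
threading.Thread(target=fs_server, args=(server,), daemon=True).start()

with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("served over IPC")               # something for the server to open

fd = my_open(client, f.name)
print(os.read(fd, 64))                       # ordinary read() on the passed fd
```

(Uses socket.send_fds/recv_fds, available in Python 3.9+ on Unix.) Since every open() is just traffic to a server, auditing and access control can sit on the channel rather than inside the kernel.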
On Mon, Feb 08, 2021 at 10:58:08PM -0500, M Douglas McIlroy wrote:
> > Do they *really* want something which is just V7 Unix, with nothing else?
> > No TCP/IP, no hot-plug USB support? No web browsing?
>
> > Oh, you wanted more than that? Feature bloat! Feature bloat!
> > Feature bloat! Shame! Shame! Shame!
>
> % ls /usr/share/man/man2|wc
> 495 495 7230
> % ls /bin|wc
> 2809 2809 30468
>
> How many of roughly 500 system calls (to say nothing of uncounted
> ioctl's) do you think are necessary for writing those few crucial
> capabilities that distinguish Linux from v7? There is
> undeniably bloat, but only a sliver of it contributes to the
> distinctive utility of today's systems.
Well, let's take a look at those system calls. They fall into a
number of major categories:
*) BSD innovations
*) BSD socket interfaces (so if you want TCP/IP... is it bloat?)
*) BSD job control
*) BSD effective id and its extensions
*) BSD groups
*) New versions to maintain stable ABI's, e.g., (dup vs dup2 vs dup3,
wait vs wait3 vs wait4 vs waitpid, stat vs stat64, lstat vs lstat64,
chown vs chown32, etc.)
*) System V IPC support (is support for enterprise databases like
Oracle "bloat"?)
*) Posix real-time extensions
*) Posix extended attributes
*) Windows file streams support (the original reason for the *at(2)
system calls -- openat, linkat, renameat, and a dozen more)
Ok, that last I'd agree was *pure* bloat, and an amazingly bad idea.
But there are plenty of people who have bugged/begged me to add
windows file streams because they were *convinced* it was a critical
feature. And I dare say bug-for-bug Windows compatibility was worth
millions of $$$ of potential sales, which is why they agreed to add it
--- and why I kept on getting nagged to add that feature to ext4 (and
I pushed back where the Solaris developers caved, so there. :-)
As for things like System V IPC support, that was only added to Linux
because it was worth $$$, because enterprise databases like DB2 and
Oracle demanded it. Is that evidence of "cancer"? You might not want
it, but that's a great example of "one person's bloat is another
person's critical feature".
Or consider the dozen plus BSD sockets interface, which if removed
would mean no TCP/IP support, and no graphical windowing systems.
Critical feature, or bloat?
But hey, if you only want V7 Unix, why are you complaining? Just go
and use it, and give up on all of these cancerous new features. And I
promise to get off of your lawn. :-)
Cheers,
- Ted
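[A concrete look at one category in the list above, "new versions to maintain stable ABIs", using the dup family: dup() returns the lowest free fd, dup2() lets the caller pick the target, and dup3() adds a flags argument so close-on-exec can be set atomically - each new call added without changing the behavior of the old ones. The claim that CPython's os.dup2(..., inheritable=False) uses dup3() on Linux is an assumption about the implementation.]

```python
import os

r, w = os.pipe()

a = os.dup(r)                            # dup(): kernel picks the fd number
b = os.dup2(r, 100)                      # dup2(): caller asks for fd 100
c = os.dup2(r, 101, inheritable=False)   # dup3()-style: close-on-exec set atomically

os.write(w, b"one pipe")
print(os.read(a, 16))                    # every duplicate refers to the same pipe
print(os.get_inheritable(100), os.get_inheritable(101))
```

Three syscalls where one might have sufficed with hindsight - but removing or changing dup() or dup2() would have broken every existing binary, which is exactly the pressure that grows the table.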
On 2/8/21, Theodore Ts'o <tytso@mit.edu> wrote:
> On Mon, Feb 08, 2021 at 10:58:08PM -0500, M Douglas McIlroy wrote:
>> > Do they *really* want something which is just V7 Unix, with nothing
>> > else?
>> > No TCP/IP, no hot-plug USB support? No web browsing?
>>
>> > Oh, you wanted more than that? Feature bloat! Feature bloat!
>> > Feature bloat! Shame! Shame! Shame!
>>
>> % ls /usr/share/man/man2|wc
>> 495 495 7230
>> % ls /bin|wc
>> 2809 2809 30468
>>
>> How many of roughly 500 system calls (to say nothing of uncounted
>> ioctl's) do you think are necessary for writing those few crucial
>> capabilities that distinguish Linux from v7? There is
>> undeniably bloat, but only a sliver of it contributes to the
>> distinctive utility of today's systems.
>
> Well, let's take a look at those system calls. They fall into a
> number of major categories:
>
> *) BSD innovations
> *) BSD socket interfaces (so if you want TCP/IP... is it bloat?)
> *) BSD job control
> *) BSD effective id and its extensions
> *) BSD groups
> *) New versions to maintain stable ABI's, e.g., (dup vs dup2 vs dup3,
> wait vs wait3 vs wait4 vs waitpid, stat vs stat64, lstat vs lstat64,
> chown vs chown32, etc.)
> *) System V IPC support (is support for enterprise databases like
> Oracle "bloat"?)
> *) Posix real-time extensions
> *) Posix extended attributes
> *) Windows file streams support (the original reason for the *at(2)
> system calls -- openat, linkat, renameat, and a dozen more)
>
> Ok, that last I'd agree was *pure* bloat, and an amazingly bad idea.
> But there are plenty of people who have bugged/begged me to add
> windows file streams because they were *convinced* it was a critical
> feature. And I dare say bug-for-bugs Windows compatibility was worth
> millions of $$$ of potential sales, which is why they agreed to add it
> --- and why I kept on getting nagged to add that feature to ext4 (and
> I pushed back where the Solaris developers caved, so there. :-)
>
> As for things like System V IPC support, that was only added to Linux
> because it was worth $$$, because enterprise databases like DB2 and
> Oracle demanded it. Is that evidence of "cancer"? You might not want
> it, but that's a great example of "one person's bloat is another
> person's critical feature".
>
> Or consider the dozen plus BSD sockets interface, which if removed
> would mean no TCP/IP support, and no graphical windowing systems.
> Critical feature, or bloat?
>
> But hey, if you only want V7 Unix, why are you complaining? Just go
> and use it, and give up on all of this cancerous new features. And I
> promise to get off of your lawn. :-)
>
There's no reason any of that has to be implemented with primitives
though. All of that could be implemented on top of normal file APIs
fairly easily.
Also, sockets are not the ideal interface for a window server IMO. The
only reason they are used is because conventional Unix didn't provide
user-mode file server support until fairly recently (and the support
that's been added to more recent conventional Unices is a hack that
has poor performance and isn't used all that much).
Henry Bent <henry.r.bent@gmail.com> wrote:
> Apple loves to move quickly and abandon
> compatibility, and in that respect it's an interesting counterpoint to
> Linux or a *BSD where you can have decades old binaries that still run.

That was true decades ago, but no longer. In the intervening time, all the major Linux distributions have stopped releasing OS's that support 32-bit machines. Even those that support 32-bit CPUs have often desupported the earlier CPUs (like, what was wrong with the 80386?). Essentially NO applications require 64-bit address spaces, so arguably if they wanted to lessen their workload, they should have desupported the 64-bit architectures (or made kernels and OS's that would run on both from a single release). But that wouldn't give them the gee-whiz-look-at-all-the-new-features feeling.

I ran 32-bit OS releases on all my 64-bit x86 hardware for years. They ran faster and smaller than the amd64 versions, and also ran old binaries for more than a decade. But their vendors and support teams decided that doing the release engineering to keep them running was more work than pulling the plug.

Even Fedora has desupported the One Laptop Per Child hardware now -- no new releases for millions of kids! And desupported all the other cheap Intel mobile CPUs, let alone your typical desktop 80386, 80486, or Pentium. Have you tried running Linux on a machine without a GPU these days? It's truly sad that to gain stupid animated window tricks, they broke compatibility with millions of existing systems.

Here's one overview of the niche distros that still have x86 support:

https://fossbytes.com/best-lightweight-linux-distros/

Even those are dropping like flies, e.g. Ubuntu MATE now says "For older hardware based on i386. Supported until April 2021", i.e. only til next month! The PuppyLinux.com web site is now a 404. Etc.

(I'm not up on what the BSD releases are doing.)

John
On 2/8/2021 9:55 PM, John Gilmore wrote:
> That was true decades ago, but no longer.  In the intervening time, all
> the major Linux distributions have stopped releasing OS's that support
> 32-bit machines.  [...]
>
> (I'm not up on what the BSD releases are doing.)

i386 has been demoted on FreeBSD:

https://lists.freebsd.org/pipermail/freebsd-announce/2021-January/002006.html

I don't think there's any change on NetBSD; no idea about OpenBSD, but I
assume they're the same.

In all honesty, I don't think that backwards compatibility has ever been
that great on Linux -- at least not for the last twenty or so years, in
my (limited) experience.  It's not like Solaris, where you could build
on 2.4 and there's a good chance it will run on 11, or at least 10.
On 2/9/21 12:55 AM, John Gilmore wrote:
> That was true decades ago, but no longer.  In the intervening time, all
> the major Linux distributions have stopped releasing OS's that support
> 32-bit machines.  [...]
>
> (I'm not up on what the BSD releases are doing.)
>
> John
Sigh... 32bit will be 2nd tier in FreeBSD 13 :)
Andrew Warkentin <andreww591@gmail.com> wrote:
> A lot of people still seem to believe that microkernels are inherently
> slow, even though fast microkernels (specifically QNX) predate the
> slow ones by several years.
Wait, are we talking about the same operating system called QNX?
We had a customer at Cygnus in the 1990s (perhaps QNX itself) who wanted
us to port the GNU compilers to it. It was the slowest, buggiest system
we ever tried to run our code on. I think they claimed POSIX
compatibility; hollow laugh! It was more like 1970's compatibility. It
had 14-character file names, the shell and utilities regularly
core-dumped when doing ordinary work, everything had built-in random
undocumented line length limits and file size limits and such (which was
also true in V7 -- that's one thing Richard Stallman insisted on fixing
in every GNU utility; see the GNU Coding Standards).
Our GNU compiler tools ran everywhere, they hosted and bootstrapped on
everything. Everything except QNX. Shell scripts and makefiles that
worked on a hundred other UNIX systems were impossible to get working on
QNX. I think we reported dozens of QNX bugs to the vendor, most of which
never got fixed.
Perhaps somewhere under all that crud there was some kind of "fast
microkernel", but you couldn't prove it by me. By the time it got to
user code, the only thing it was fast at was failing. We were trying to
do real work on it, and gave up after some engineers turned the air blue
with incredulous exclamations. I think we ended up cross-compiling the
GNU compilers for it, from some sane system. They still had to fix a
bunch of bugs in their libraries that we had to link with.
I realize this flame is not about microkernels. But perhaps if they had
spent less time optimizing cache hits in the microkernel, the rest of
their system wouldn't have been shot full of obvious holes.
John
$ k-2.9t
K 2.9t 2001-02-14 Copyright (C) 1993-2001 Kx Systems
Evaluation. Not for commercial use.
\ for help. \\ to exit.
This is a *linux* x86 binary from almost exactly 20 years ago running on FreeBSD built from last Wednesday’s sources.
$ uname -rom
FreeBSD 13.0-ALPHA3 amd64
Generally, compatibility support for previous versions of FreeBSD has been decent when I have tried, though the future for x86 support doesn’t look bright.
> On Feb 8, 2021, at 10:56 PM, John Gilmore <gnu@toad.com> wrote:
>
> (I'm not up on what the BSD releases are doing.)
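The session above relies on FreeBSD's Linux binary-compatibility layer (the "Linuxulator"). For anyone who wants to reproduce it, here is a rough sketch of enabling that layer; the commands assume FreeBSD 13.x run as root, the package name may differ on other releases, and the binary name in the last line is a placeholder:

```shell
# Sketch: enable the Linuxulator on FreeBSD 13.x (run as root).
kldload linux64                  # 64-bit Linux ABI (use "linux" for 32-bit binaries)
sysrc linux_enable="YES"         # load the modules and mount linprocfs/linsysfs at boot
service linux start
pkg install -y linux_base-c7     # minimal CentOS 7 userland under /compat/linux
# Very old binaries may lack an ELF brand note; mark them explicitly:
brandelf -t Linux ./some-old-linux-binary
```

With that in place, an unmodified Linux/x86 executable runs directly, with its system calls translated by the kernel.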
On Mon, 8 Feb 2021, John Gilmore wrote:
> Wait, are we talking about the same operating system called QNX?
Hi John. I found your comment really interesting. I have less experience
with QNX than you do but I do have a long-standing interest in OS design
and found it interesting from that perspective.
QNX is a high performance real-time OS that has run critical
infrastructure for decades. Notably it's been used to run nuclear power
stations in many countries.
Usage may have declined in recent years but in the vintage you're
discussing (90s) QNX would have been widely used in critical
infrastructure. I contracted to an electricity provider about 8 or 9
years ago and QNX was definitely still there.
Looks like BlackBerry now own QNX and the OS is still out there keeping
critical stuff running.
For the record I am aware that microkernels are not intrinsically slow and
I think they can offer significant advantages. MINIX 3 is an interesting
design; pity about the name.
Cheers,
Rob
>Or consider this. Unix grew by about 39 system calls in its first
>decade, but an average of 40
>per decade ever since. Is this accelerated growth more symptomatic of
>maturity or of cancer?
3rd option: competition. Linux competes against the very modern and certainly 'bloated' Windows and macOS operating systems, which define what an OS is today, and we must be better (we already are, for sure).
> Or consider this. Unix grew by about 39 system calls in its first
> decade, but an average of 40
> per decade ever since. Is this accelerated growth more symptomatic of
> maturity or of cancer?
Looks like I need a typing tutor. 39 should be 30. And a math tutor, too. 40
should be 100.
Doug
On Mon, Feb 08, 2021 at 11:42:34PM -0800, John Gilmore wrote:
> Andrew Warkentin <andreww591@gmail.com> wrote:
> > A lot of people still seem to believe that microkernels are inherently
> > slow, even though fast microkernels (specifically QNX) predate the
> > slow ones by several years.
>
> Wait, are we talking about the same operating system called QNX?
>
> We had a customer at Cygnus in the 1990s (perhaps QNX itself) who wanted
> us to port the GNU compilers to it. It was the slowest, buggiest system
> we ever tried to run our code on. I think they claimed POSIX
> compatibility; hollow laugh!
Has to be a different system. The QNX I ran on could handle a bunch of
users on terminals on a 286. But that was pre-POSIX, maybe the POSIX
stuff was crap, I wasn't using it then.
On Mon, Feb 08, 2021 at 11:37:38PM -0700, Andrew Warkentin wrote:
> > But hey, if you only want V7 Unix, why are you complaining?  Just go
> > and use it, and give up on all of this cancerous new features.  And I
> > promise to get off of your lawn.  :-)
>
> There's no reason any of that has to be implemented with primitives
> though.  All of that could be implemented on top of normal file APIs
> fairly easily.

Everything can be implemented in terms of a Turing machine tape, so I'm
sure that's true.  Whether or not it would be *performant* and *secure*
in the face of application level bugs might be a different story,
though.

In fact, some of the terrible semantics of the Posix interfaces exist
only because there were traditional Unix vendors on the standards
committee insisting on semantics that *could* be implemented using a
user-mode library on top of normal file API's (I'm looking at you,
fcntl locking semantics, where a close of *any* file descriptor, even a
fd cloned via dup(2) or fork(2), will release the lock).  So yes, Posix
fcntl(2) locking *can* be implemented in terms of normal file API's....
AND IT WAS A TERRIBLE IDEA.  (For more details, see [1].)

[1] https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html

I could go on about other spectacularly bad ideas enshrined in POSIX,
such as telldir(3) and seekdir(3), which date all the way back to the
assumption that directories should only be implemented in terms of
linear linked lists with O(n) lookup performance, but I don't whine
about that as feature bloat imposed by external standards, but just the
cost of doing business.  (Or at least, of relevance.)

> Also, sockets are not the ideal interface for a window server IMO.  The
> only reason they are used is because conventional Unix didn't provide
> user-mode file server support until fairly recently (and the support
> that's been added to more recent conventional Unices is a hack that
> has poor performance and isn't used all that much).
I'm not sure what you're referring to; if you mean the *at(2) system
calls, they exist in Linux (not for !@#!? Windows file streams support)
because they are needed to provide secure and performant user-mode file
servers for things like Samba.  Trying to implement a user-space file
server using only the V7 Unix primitives will cause you to have some
really horrible Time of Use vs Time of Check (TOUTOC) security gaps;
you can narrow the TOUTOC races with some terrible performance-sucking
impacts, but removing them entirely is almost impossible.

The reason why it's not used that much is because a lot of programmers
want to be compatible with OS's that don't support those new interfaces
--- and so they don't use it.

And that's the final thing for folks to remember.  There's an old
saying, "without software, it's just a paperweight."  This is just as
true for an OS; if you don't have application software, who cares how
clean it is?  And it had better be performant, and not just a
Posix-compliant layer afterthought demanded by the Product Manager as a
checklist feature item.

Rob Pike talked about this over two decades ago in his talk, Systems
Software Research is Irrelevant[2].  The slide talking about Standards
(page #13) is especially relevant here:

   To be a viable computer system, one must honor a huge list of large,
   and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA,
   Unicode, POSIX, NFS, SMB, MIME, POP, IMAP, X, ...  A huge amount of
   work, but if you don't honor the standards you're marginalized.

[2] http://herpolhode.com/rob/utah2000.pdf

That's because most people aren't going to port or rewrite application
software for some random OS, whether it is a research OS or someone's
new "simple, clean, reimplementation".  And most users do expect to
have a working web browser... and text editor..., and their favorite
games, whether it's nethack or spacewars, etc., etc., etc.  If you want
to call it feature bloat, so be it.
But that seems to me like it's an excuse made by people who are bitter
that people aren't using (or paying for, or contributing to) their pet
operating system.

Cheers,

	- Ted
On Tue, Feb 9, 2021 at 11:14 AM Theodore Ts'o <tytso@mit.edu> wrote:

> I'm looking at you, fcntl locking semantics, where a close of *any*
> file descriptor, even a fd cloned via dup(2) or fork(2) will release
> the lock.

From BTSJ 57:6:

> The file system maintains no locks visible to the user, nor is there
> any restriction on the number of users who may have a file open for
> reading or writing.  Although it is possible for the contents of a file
> to become scrambled when two users write on it simultaneously, in
> practice difficulties do not arise.  We take the view that locks are
> neither necessary nor sufficient, in our environment, to prevent
> interference between users of the same file.  They are unnecessary
> because we are not faced with large, single-file databases maintained
> by independent processes.  They are insufficient because locks in the
> ordinary sense, whereby one user is prevented from writing on a file
> that another user is reading, cannot prevent confusion when, for
> example, both users are editing a file with an editor that makes a copy
> of the file being edited.
>
> There are, however, sufficient internal interlocks to maintain the
> logical consistency of the file system when two users engage
> simultaneously in activities such as writing on the same file, creating
> files in the same directory, or deleting each other’s open files.

John Cowan          http://vrici.lojban.org/~cowan        cowan@ccil.org
How they ever reached any conclusion at all is starkly unknowable to the
human mind.  --"Backstage Lensman", Randall Garrett
On 09/02/2021 06:03, Robert Brockway wrote (in part):
> Looks like BlackBerry now own QNX and the OS is still out there
> keeping critical stuff running.
According to their website, they are in 175 million vehicles (and a
bunch of other things).
N.
On 08/02/2021 22:58, M Douglas McIlroy wrote:
> % ls /usr/share/man/man2|wc
>     495     495    7230
> % ls /bin|wc
>    2809    2809   30468

Whoa!  Is this Linux?  On my Solaris 10 boxen, I find:

[~]=> ls /bin |wc
    1113    1113   10256
[~]=> ls /usr/share/man/man2|wc
     219     219    2299

N.

> How many of roughly 500 system calls (to say nothing of uncounted
> ioctl's) do you think are necessary for writing those few crucial
> capabilities that distinguish Linux from v7?  There is undeniably
> bloat, but only a sliver of it contributes to the distinctive utility
> of today's systems.
>
> Or consider this.  Unix grew by about 39 system calls in its first
> decade, but an average of 40 per decade ever since.  Is this
> accelerated growth more symptomatic of maturity or of cancer?
>
> Doug
Theodore Ts'o writes:
> On Mon, Feb 08, 2021 at 10:58:08PM -0500, M Douglas McIlroy wrote:
> > > Do they *really* want something which is just V7 Unix, with nothing else?
> > > No TCP/IP, no hot-plug USB support? No web browsing?
> >
> > > Oh, you wanted more than that? Feature bloat! Feature bloat!
> > > Feature bloat! Shame! Shame! Shame!
> >
> > % ls /usr/share/man/man2|wc
> > 495 495 7230
> > % ls /bin|wc
> > 2809 2809 30468
> >
> > How many of roughly 500 system calls (to say nothing of uncounted
> > ioctl's) do you think are necessary for writing those few crucial
> > capabilities that distinguish Linux from v7? There is
> > undeniably bloat, but only a sliver of it contributes to the
> > distinctive utility of today's systems.
>
> Well, let's take a look at those system calls. They fall into a
> number of major categories:
>
> *) BSD innovations
> *) BSD socket interfaces (so if you want TCP/IP... is it bloat?)
> *) BSD job control
> *) BSD effective id and its extensions
> *) BSD groups
> *) New versions to maintain stable ABI's, e.g., (dup vs dup2 vs dup3,
> wait vs wait3 vs wait4 vs waitpid, stat vs stat64, lstat vs lstat64,
> chown vs chown32, etc.)
> *) System V IPC support (is support for enterprise databases like
> Oracle "bloat"?)
> *) Posix real-time extensions
> *) Posix extended attributes
> *) Windows file streams support (the original reason for the *at(2)
> system calls -- openat, linkat, renameat, and a dozen more)
>
> Ok, that last I'd agree was *pure* bloat, and an amazingly bad idea.
> But there are plenty of people who have bugged/begged me to add
> windows file streams because they were *convinced* it was a critical
> feature. And I dare say bug-for-bugs Windows compatibility was worth
> millions of $$$ of potential sales, which is why they agreed to add it
> --- and why I kept on getting nagged to add that feature to ext4 (and
> I pushed back where the Solaris developers caved, so there. :-)
>
> As for things like System V IPC support, that was only added to Linux
> because it was worth $$$, because enterprise databases like DB2 and
> Oracle demanded it. Is that evidence of "cancer"? You might not want
> it, but that's a great example of "one person's bloat is another
> person's critical feature".
>
> Or consider the dozen plus BSD sockets interface, which if removed
> would mean no TCP/IP support, and no graphical windowing systems.
> Critical feature, or bloat?
>
> But hey, if you only want V7 Unix, why are you complaining? Just go
> and use it, and give up on all of this cancerous new features. And I
> promise to get off of your lawn. :-)
I'm with Doug here. Some time ago, I was asked to give a talk at OSU
based on my life with UNIX. Mostly figured out what to say while
driving down there. I started the talk by posing the question of what
makes UNIX great. I said that while many people had different answers,
to me it was the good abstractions and the composability of programs
that it supported. I then asked why so many people had forgotten that
which started a very lively discussion.
When I look at linux (and pretty much anything else today), the loss of
good abstractions is quite evident and in my opinion is responsible for
much of the bloat and mess.
This didn't begin with linux. To me, it started when UNIX moved out of
research. Changes were made without any artistry or elegance. And it
happened in BSD-land too. I never liked the socket interface, would
have preferred to open("/dev/ip"). But, as Clem and I have discussed,
the socket API was the result of the original code coming from elsewhere
(BBN I think), and the political desire to keep the networking code
separable from the rest of the kernel.
At this point in time, I feel that linux is suffering from the tragedy
of the commons. I'm not interested in getting into an argument over
this. My opinion is that I think that more direction would have helped.
I think that there was a golden era for contribution that has passed.
There was a time when because of cost of machines and education that
open-source contributors were somewhat of the same caliber. But now that
machines are essentially free and every java programmer thinks that they
know everything, the quality of contributions and especially design has
dropped.
While I use linux, it's no longer a system that I trust in any way. It's
huge, bloated, rats-nest undocumented code. Yes, I know that a small
percentage of people like Ted spend a lot of time on the code and therefore
know it well. But I find it an impenetrable mess. When I first had a need
to look at the UNIX kernel, it was easy to figure out what was going on. Not
so with linux. Of course, I'm getting old and maybe not as good at this stuff
as I used to be.
You might claim that the code is stable and debugged, but if that was the case
then I wouldn't be getting a new kernel on practically every daily update.
To me, it also seems like linux is focused on two ends of the spectrum with
the middle mostly ignored. Lots of focus on embedded and data centers. As
a desktop system, it's tolerable but generally annoying. Again, somewhat a
function of the tragedy of the commons; the environment consists of programs
with little consistency. I think that if one compared the user interfaces to,
let's say, gimp, audacity, and blender, the common features would be close to
the null set. And those are just a few "major" programs.
So a more interesting question to me is, when is linux going to die under its
own weight, and what if anything will replace it. With the exception of
Windows, most everything today has some roots in UNIX. If one was going to
reimagine a system going back to the UNIX philosophy, what would it look like?
Is there likely to ever be an opportunity for a replacement, designed system
or are we stuck with the status quo forever?
Jon
On Mon, Feb 08, 2021 at 10:55:50PM -0800, John Gilmore wrote:
>
> That was true decades ago, but no longer. In the intervening time, all
> the major Linux distributions have stopped releasing OS's that support
> 32-bit machines. Even those that support 32-bit CPUs have often
> desupported the earlier CPUs (like, what was wrong with the 80386?).
> Essentially NO applications require 64-bit address spaces, so arguably
> if they wanted to lessen their workload, they should have desupported
> the 64-bit architectures (or made kernels and OS's that would run on
> both from a single release). But that wouldn't give them the
> gee-whiz-look-at-all-the-new-features feeling.
So there is currently a *single* volunteer supporting the 32-bit i386
platform for Debian, and in December 2020 there was an e-mail thread
asking whether there were volunteer resources to be able to provide
the necessary support (testing installers, building and testing
packages for security updates, etc.) for the 3.5 years of stable
support. I don't believe the final decision has been made, but if
more people were willing to volunteer to make it happen, or to pay $$$
to provide that support, I'm sure Debian would be very happy to keep
i386 on life support for the next stable release.
Ultimately, it's all about what people are willing to support by
providing direct volunteer support, or by putting the money where
their mouth is. "We have met the enemy, and he is us."
- Ted
On 2/9/21 12:31 PM, John Cowan wrote:
> From BTSJ 57:6:
>
> The file system maintains no locks visible to the user, nor is there
> any restriction on the number of users who may have a file open for
> reading or writing.  [...]  We take the view that locks are neither
> necessary nor sufficient, in our environment, to prevent interference
> between users of the same file.  [...]

"In our environment" is doing some pretty heavy lifting there.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet@case.edu    http://tiswww.cwru.edu/~chet/
My experience was that old QNX (the one that was distributed on a floppy) was certainly wanting; however, the newer one, the one whose source was open to the community during the '90s and part of the 2000s (if I remember correctly), was pretty solid and standard. I ported a (significant) number of complex packages to it and made distribution packages without any problem. BTW, it was far advanced, making heavy use of union file systems (sorta like the modern Snaps of Ubuntu), and the development tools were pretty powerful and comfy to use.
Then it switched hands and the source was closed again. Which was a pity. I still cherish my copies of the source.
On Tuesday, 9 February 2021 19:25:29 CET, Nemo Nusquam <cym224@gmail.com> wrote:
> On 09/02/2021 06:03, Robert Brockway wrote (in part):
> > Looks like BlackBerry now own QNX and the OS is still out there
> > keeping critical stuff running.
>
> According to their website, they are in 175 million vehicles (and a
> bunch of other things).
>
> N.
Many of those mentioned in the fossbytes article have become 64-bit
only.  But I can recommend antiX (pronounced "antics") as a suitable OS
for an old-but-good i386 box or laptop.
Wesley Parish
On 2/9/21, John Gilmore <gnu@toad.com> wrote:
>> Apple loves to move quickly and abandon compatibility, and in that
>> respect it's an interesting counterpoint to Linux or a *BSD where you
>> can have decades old binaries that still run.
>
> That was true decades ago, but no longer.  [...]
>
> Here's one overview of the niche distros that still have x86 support:
>
> https://fossbytes.com/best-lightweight-linux-distros/
>
> Even those are dropping like flies, e.g. Ubuntu MATE now says "For older
> hardware based on i386. Supported until April 2021", i.e. only til next
> month! The PuppyLinux.com web site is now a 404. Etc.
>
> John
On Tue, Feb 09, 2021 at 02:02:34PM -0500, Theodore Ts'o wrote:
> On Mon, Feb 08, 2021 at 10:55:50PM -0800, John Gilmore wrote:
> > That was true decades ago, but no longer.  In the intervening time, all
> > the major Linux distributions have stopped releasing OS's that support
> > 32-bit machines.  Even those that support 32-bit CPUs have often
> > desupported the earlier CPUs (like, what was wrong with the 80386?).

Um, John, it's a 33MHz part.  Who wants that?

> So there is currently a *single* volunteer supporting the 32-bit i386
> platform for Debian, and in December 2020 there was an e-mail thread
> asking whether there were volunteer resources to be able to provide
> the necessary support (testing installers, building and testing
> packages for security updates, etc.) for the 3.5 years of stable
> support.

John is one damn good programmer, maybe he'll offer.

--lm
On Tue, Feb 09, 2021 at 11:00:15AM -0800, Jon Steinhart wrote:
> While I use linux, it's no longer a system that I trust in any way. It's
> huge, bloated, rats-nest undocumented code. Yes, I know that a small
> percentage of people like Ted spend a lot of time on the code and therefore
> know it well. But I find it an impenetrable mess. When I first had a need
> to look at the UNIX kernel, it was easy to figure out what was going on. Not
> so with linux. Of course, I'm getting old and maybe not as good at this stuff
> as I used to be.
Jon, I think you might be old enough to have run v7, if not, like me, you
have read the Lions book and loved it. That kernel didn't do much, it was
uniprocessor, block interrupts design, no networking, it was very basic.
Amazingly nice to read, but it didn't do a lot.
Which is part of the reason we liked it. I loved SunOS 4.x but it was
also a uniprocessor kernel (Greg Limes tried to make it SMP but it was not
a kernel that wanted to be SMP). I loved it because I could understand
it and it was a bit more complex than v7.
The problem space that kernels address these days include SMP, NUMA,
and all sorts of other stuff. I'm not sure I could understand the Linux
kernel even if I were in my prime. It's a harder space, you need to
know a lot more, be skilled at a lot more.
My take is we're old dudes yearning for the days when everything
was simple. Remember when out of order wasn't a thing? Yeah, me too,
I gave up on trying to debug kernels when kadb couldn't tell me what I
was looking at.
--lm
I won't dispute your age, or how many layers of pearl are on the seed,
Larry, but MP Unix was a thing long, long ago.

I am pretty sure it was written up in the BSTJ, and there was Pyramid by
1984/5, and an MP Unix system otherwise running at Melbourne University
(Rob Elz) around 1988.

You might be ancient, but you weren't THAT ancient in the 1980s.

Anyway, pearls before swine, and age before beauty.

-G
I'm going to rant a little here, George; this is not to you, it's to the
topic.  You are correct, but a lot of us learned from a uniprocessor
kernel.  I certainly did.

So there is MP and then there is MP.  SGI redid all of their kernel
structs so that the most important stuff was on the same cache line.
The 1980's and 1990's talked about SMP, and then there was a whole
series of papers that talked about cache affinity, which was geek speak
for "the S in SMP wasn't".  It's just layers and layers of more
complexity.  Which we can talk about, and us older folks want to say
isn't needed.  We're wrong.

I'm with Ted. [1]  He's active in the Linux kernel, has been for a long
time; he's someone that I know is better than me at kernel stuff.  He
has kept up, so far as I can tell: when there is yet another layer of
something, he's a guy that goes and digs into that something and learns
about it, reasons about it, weighs the tradeoffs.  He is NOT a guy that
just shoves stuff into the kernel on a whim.  He thinks, he does the
tradeoffs.

And he is here saying, well, what do you want?  You want all the stuff
you have, or are you willing to go run on v7?  Who among us is running
v7, or some other kernel that we all love because we understand it?  I'd
venture a guess that it is no one.  We like our X11, we like that we can
do "make -j" and build a kernel in a minute or two, we like our web
browsers, we like a lot of stuff that, if you look at it from the lens
of "but it should be simple" -- well, that lens doesn't give us what we
want.

I get it.  I love the clean simple lines that were the original Unix,
but we live in a more complex world.  Ted is straddling those lines, and
he's doing the best he can, and his best is pretty darn good.  I'd
argue: listen to Ted.  He's got the balance.

--lm

[1] Truth in advertising, Ted and I are friends, we used to hike
together in Pacifica, we like each other.
On 2/9/21, Theodore Ts'o <tytso@mit.edu> wrote:

> Everything can be implemented in terms of a turing machine tape, so
> I'm sure that's true. Whether or not it would be *performant* and
> *secure* in the face of application level bugs might be a different
> story, though.

seL4 basically provides no primitives other than send and receive, and UX/RT will just map read()/write()-family APIs onto send and receive (the IPC transport layer won't be trivial, but it will be simpler than those under most other microkernel OSes). Basically everything else will be implemented on top of the read()/write() APIs provided by the transport layer (memory mapping will sort of bypass it, but all user memory will be handled as memory-mapped files, even that which is anonymous on other systems). In order to better map onto seL4 IPC semantics, variants of read()/write() that operate on message registers and a shared buffer will be provided, but these will be compatible with each other and with the traditional versions (messages will be copied on read when a different variant was used to write them). Basically, if there were something that couldn't be implemented efficiently and securely on top of a combination of read()/write() and shared memory, that would mean that it couldn't be securely implemented on top of IPC, and I'm not sure that there is anything like that.

> In fact, some of the terrible semantics of the Posix interfaces exist
> only because there were traditional Unix vendors on the standards
> committee insisting on semantics that *could* be implemented using a
> user-mode library on top of normal file API's (I'm looking at you,
> fcntl locking semantics, where a close of *any* file descriptor, even
> a fd cloned via dup(2) or fork(2) will release the lock). So yes,
> Posix fcntl(2) locking *can* be implemented in terms of normal file
> API's.... AND IT WAS A TERRIBLE IDEA. (For more details, see [1].)
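[Ed.: the fcntl(2) complaint quoted above is concrete enough to demonstrate. The sketch below, in Python, relies on fcntl.lockf being a wrapper over the same POSIX fcntl-style record locks: a process takes an exclusive lock on one descriptor, then opens and closes a second, unrelated descriptor for the same file. Per POSIX, that close silently drops the process's lock, as a probing child process can observe.]

```python
import fcntl
import os
import tempfile

def probe(path):
    # Fork a child that attempts a non-blocking exclusive lock and
    # reports back over a pipe. POSIX record locks are per-process,
    # so the child genuinely contends with the parent's lock.
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        os.close(r)
        fd = os.open(path, os.O_RDWR)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os.write(w, b"acquired")
        except OSError:
            os.write(w, b"blocked")
        os._exit(0)
    os.close(w)
    result = os.read(r, 32).decode()
    os.close(r)
    os.waitpid(pid, 0)
    return result

fd0, path = tempfile.mkstemp()
os.close(fd0)                      # close mkstemp's fd before we lock

fd1 = os.open(path, os.O_RDWR)
fcntl.lockf(fd1, fcntl.LOCK_EX)    # take an exclusive POSIX lock on fd1
first = probe(path)                # another process sees: "blocked"

fd2 = os.open(path, os.O_RDWR)     # a second, unrelated descriptor...
os.close(fd2)                      # ...closed: POSIX drops OUR fd1 lock too

second = probe(path)               # another process now sees: "acquired"
print(first, second)

os.close(fd1)
os.unlink(path)
```

The lock was never touched through fd2, yet closing fd2 released it; that is exactly the "close of *any* file descriptor releases the lock" semantic being complained about.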
UX/RT's file locking will be implemented with RPCs to the process server, just like open()/close() and the like (which will use read()/write()-family APIs underneath; the initial RPC connection to the process server will be permanently open, but it will be possible to create new connections to manipulate the environment of child processes before starting them, so that fork() doesn't have to be a primitive anymore). AFAIK, little actually depends on those rather broken "close one FD and release all locks on that file" semantics, so UX/RT will implement more sane locking semantics by default. There will be a flag to revert to the traditional semantics (probably just implemented at the library level) in case anything actually depends on them.

> I could go on about other spectacularly bad ideas enshrined in POSIX,
> such as telldir(2) and seekdir(2), which date all the way back to the
> assumption that directories should only be implemented in terms of
> linear linked lists with O(n) lookup performance, but I don't whine
> about that as feature bloat imposed by external standards, but just
> the cost of doing business. (Or at least, of relevance.)

The directory contents that normal user programs actually see on UX/RT will be in a standardized format managed by the VFS (since support for a limited form of union mounts will be built in).

> I'm not sure what you're referring to; if you mean the *at(2) system
> calls, which is why they exist in Linux (not for !@#!? Windows file
> streams support); they are needed to provide secure and performant
> user-mode file servers for things like Samba. Trying to implement a
> user-space file server using only the V7 Unix primitives will cause
> you to have some really horrible Time of Use vs Time of Check (TOUTOC)
> security gaps; you can narrow the TOUTOC races with some terrible
> performance sucking impacts, but removing them entirely is almost
> impossible.
I'm talking about implementing local filesystems in regular processes (rather than requiring them to be in the kernel), like in QNX or Plan 9, not about network filesystems (although of course network filesystem clients can be implemented on top of such an API). Linux has support for them through FUSE, but AFAIK it has performance issues and isn't very well integrated, so it isn't used all that much. When it comes to normal server processes, UX/RT will mostly depend on checking security on open() rather than on read()/write()-family APIs, which will limit the risk of TOCTTOU vulnerabilities. Where security does have to be checked on reads or writes (such as the ones underlying the RPC implementing open() itself), the data will be copied before checking. Using the traditional read()/write() instead of the new zero-copy equivalents should usually be good enough, AFAIK, since they copy to a caller-provided buffer.

> That's because most people aren't going to port or rewrite application
> software for some random OS, whether it is a research OS or someone's
> new "simple, clean, reimplementation". And most users do expect to
> have a working web browser.... and text editor..., and their favorite
> games, whether it's nethack or spacewars, etc., etc., etc.

I'm very well aware of that. UX/RT will implement most Linux APIs (either in libraries, servers, or combinations of the two) and will have a Linux binary compatibility layer. The only major incompatibilities are likely to be with stuff that manages sessions and logins (since UX/RT will natively have a mostly process-oriented security model, with no way to fully revert to traditional Unix security outside of running programs in fakeroot containers).
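[Ed.: the earlier point about the *at(2) system calls and user-mode file servers can be made concrete. In this Python sketch (dir_fd is how os.open exposes openat(2); the "server" here is purely hypothetical), a server that re-resolves full paths on every operation loses track of a directory the moment someone renames it, while one holding a descriptor to the directory still reaches the right files.]

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
d = os.path.join(base, "export")
os.mkdir(d)
with open(os.path.join(d, "data.txt"), "w") as f:
    f.write("payload")

dirfd = os.open(d, os.O_RDONLY)            # the server pins the directory itself
os.rename(d, os.path.join(base, "moved"))  # meanwhile, someone renames it

# A server that re-resolves the full path now follows a stale name:
try:
    open(os.path.join(d, "data.txt")).close()
    path_ok = True
except FileNotFoundError:
    path_ok = False

# openat(2) relative to the held descriptor still reaches the same
# directory object, rename or no rename:
fd = os.open("data.txt", os.O_RDONLY, dir_fd=dirfd)
data = os.read(fd, 100)
os.close(fd)
os.close(dirfd)

print(path_ok, data)
shutil.rmtree(base)
```

A V7-style server has only the path-based route, so between its permission check and its open the path can be swapped out from under it; holding the directory descriptor and resolving relative to it is what closes that window.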
On Tue, Feb 9, 2021 at 9:25 PM Larry McVoy <lm@mcvoy.com> wrote:

> I'm going to rant a little here, George, this is not to you, it's to
> the topic.

All in all, that was a pretty tame rant, Larry. :-)

> Who among us is running v7, or some other kernel that we all love
> because we understand it? I'd venture a guess that it is no one.
> We like our X11, we like that we can do "make -j" and you can build
> a kernel in a minute or two, we like our web browsers, we like a lot
> of stuff that if you look at it from the lens "but it should be simple",
> but that lens doesn't give us what we want.

I had a stint in life where my "primary" environment was a VT320 hooked up to a VAXstation running VMS, from which I'd telnet to a Unix machine. Subjectively, it was among the more productive times of my professional life: I felt that I wrote good code and could concentrate on what I was working on.

Fast forward 15 years (so a little over 10 years ago): I'm sitting in front of a Mac Pro desktop with two large displays and an infinite number of browser tabs open, and I feel almost hopelessly unproductive. I just can't concentrate; I can't find anything; things are beeping at me all the time and I have no idea where the music is coming from. Ads are telling me I should buy all kinds of things I didn't even know I needed; the temptation to read the news, or email, or the Wikipedia plot summary of some movie I saw an ad for 20 years ago (but never saw) is too great, and another 45 minutes are gone.

So I go on eBay, find a VT420 in good condition, and buy it; it arrives an unproductive week later, and I hook it up to the serial port on my Linux machine at work and configure getty and login and ... wow, this is terrible! It's just too dang limiting. And that hum from the flyback transformer is annoyingly distracting.
The lesson is that we look back at our old environments through the rosy glasses of nostalgia, but we forget the pain points. Yeah, we might moan about the X protocol or the complexity of SMP or filesystems or mmap() or whatever, but hey, programs that I care about to get my work done are already written for those environments, and do I _really_ want to write another shell or terminal program or editor or email client? Actually...no. No, I do not. So I'm sympathetic to this. I get it. I love the clean simple lines that were the original Unix > but we live in a more complex world. But this I take some exception to. Yes, the world is more complex, but part of the complexity of our systems is, as Jon asserts, poor abstractions. It's like the recent discussion of ZFS vs merged VM/Buffer caches: most people don't care. But as a system designer, I do. One _can_ build systems that support graphics and networking without X11 and sockets and with a small number of system calls. One _can_ provide some support for "legacy" systems by papering over the difference with a library (back in the day, someone even ported X11 to Plan 9), but it does get messy and you hit limitations at some point. Ted is straddling those lines > and he's doing the best he can and his best is pretty darn good. > I'd just like to stress I'm not trying to criticize Ted, or anyone else, really. We've got the systems we've got. But a lot of the complexity we've got in those systems comes from trying to retrofit a design that was fundamentally oriented towards a uniprocessor machine onto a multiprocessor system that looks approximately nothing like a PDP-11. I do agree with Jon that much of Linux's complexity is unjustified (functions called `foo` that call `__foo` that calls `__do_foo_for_bar`...I understand this is to limit nesting. But...dang), but much of it is forced by trying to accommodate a particular system model on systems that are no longer really amenable to that model. - Dan C. 
On Tue, Feb 9, 2021 at 6:41 PM Larry McVoy <lm@mcvoy.com> wrote:

> Which is part of the reason we liked it. I loved SunOS 4.x but it was
> also a uniprocessor kernel (Greg Limes tried to make it SMP but it was not
> a kernel that wanted to be SMP). I loved it because I could understand
> it and it was a bit more complex than v7.

David Barak and the OS group at Solbourne were able to do it for OS/MP: first as an ASMP kernel where CPU0 ran the Unix kernel but scheduled jobs for all the other CPUs, then as SMP where the kernel could run on any CPU, with progressively finer locking on each release.

Warner
On Tue, Feb 9, 2021 at 6:53 PM George Michaelson <ggm@algebras.org> wrote:

> I won't dispute your age, or how many layers of pearl are on the seed
> Larry, but MP unix was a thing long long ago.
>
> I am pretty sure it was written up in BSTJ, and there was Pyramid by
> 1984/5 and an MP unix system otherwise running at Melbourne University
> (Rob Elz) around 1988.
>
> You might be ancient, but you weren't THAT ancient in the 1980s.

The first MP Unix was MUNIX, dating to the 5th edition and 1975. https://calhoun.nps.edu/handle/10945/20959

Warner
For the record, I've never seen the Solbourne code but I wanted to. Seemed like they were Unix people. They took SunOS to a better place. I suspect Greg Limes would agree. On Tue, Feb 09, 2021 at 07:56:00PM -0700, Warner Losh wrote: > On Tue, Feb 9, 2021 at 6:41 PM Larry McVoy <lm@mcvoy.com> wrote: > > > Which is part of the reason we liked it. I loved SunOS 4.x but it was > > also a uniprocessor kernel (Greg Limes tried to make it SMP but it was not > > a kernel that wanted to be SMP). I loved it because I could understand > > it and it was a bit more complex than v7. > > > > David Barak and the OS group as Solbourne wwere able to do it for OS/MP. > First as a ASMP kernel where CPU0 ran the unix kernel, but scheduled jobs > for all the other CPUs, the as SMP where the kernel could run on any CPU > with progressively finer locking on each release. > > Warner -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
Sounds good. I've been out on my boat and my feet are a mess; I love that boat but I need to go rest. I'll give this the reply it deserves in the morning.

--
---
Larry McVoy lm at mcvoy.com
http://www.mcvoy.com/lm
On 2/9/21, Larry McVoy wrote:
>
> The problem space that kernels address these days include SMP, NUMA,
> and all sorts of other stuff. I'm not sure I could understand the Linux
> kernel even if I were in my prime. It's a harder space, you need to
> know a lot more, be skilled at a lot more.
>
> My take is we're old dudes yearning for the days when everything
> was simple. Remember when out of order wasn't a thing? Yeah, me too,
> I gave up on trying to debug kernels when kadb couldn't tell me what I
> was looking at.
>
> --lm
>
Pure microkernels with indirect message destinations (i.e. not thread
IDs) can simplify things somewhat with regard to multiprocessing,
since almost all OS subsystems are just regular processes that run in
their own contexts and can structure their threads as they please, as
opposed to being kernel subsystems that have to deal with the
concurrency issues that arise from the possibility of being called
from any process context. The microkernel still has to deal with being
called in any context, but it can use simpler mechanisms for dealing
with concurrency than a monolithic kernel would because processes
don't stay in kernel mode for nearly as long as they can in a
monolithic kernel.
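[Ed.: the "OS subsystems as regular processes" model is easy to caricature in a few lines of Python. The toy below is not UX/RT's actual protocol; the operation names and pickled-tuple message format are invented for illustration. A RAM-backed "file server" runs as an ordinary forked process, and the client's only primitives are sending a request and reading a reply over a pipe, in the spirit of QNX or Plan 9.]

```python
import os
import pickle

def serve(rfd, wfd, files):
    # Toy in-RAM file server: an ordinary process whose whole interface
    # is receive-request / send-reply. Messages are small, so one pipe
    # write carries one whole pickled request.
    while True:
        op, name, arg = pickle.loads(os.read(rfd, 4096))
        if op == "write":
            files[name] = arg
            reply = len(arg)
        elif op == "read":
            reply = files.get(name, b"")[:arg]
        else:                          # "bye": shut the server down
            reply = None
        os.write(wfd, pickle.dumps(reply))
        if op == "bye":
            break

c2s_r, c2s_w = os.pipe()               # client -> server channel
s2c_r, s2c_w = os.pipe()               # server -> client channel

pid = os.fork()
if pid == 0:                           # child: the "filesystem" process
    os.close(c2s_w)
    os.close(s2c_r)
    serve(c2s_r, s2c_w, {})
    os._exit(0)

os.close(c2s_r)
os.close(s2c_w)

def rpc(msg):
    # Client side: send one request, block for the reply.
    os.write(c2s_w, pickle.dumps(msg))
    return pickle.loads(os.read(s2c_r, 4096))

wrote = rpc(("write", "motd", b"hello, world"))
back = rpc(("read", "motd", 5))
rpc(("bye", None, None))
os.waitpid(pid, 0)
print(wrote, back)
```

The point of the sketch is the shape, not the mechanism: the server's concurrency story is entirely its own affair, and nothing here runs in kernel mode on behalf of another process's context.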
On Tue, Feb 9, 2021 at 7:46 PM Dan Cross <crossd@gmail.com> wrote:

> Fast forward 15 years (so a little 10 years ago), I'm sitting in front
> of a Mac Pro desktop with two large displays and an infinite number of
> browser tabs open and I feel almost hopelessly unproductive. I just
> can't concentrate; I can't find anything; things are beeping at me all
> the time and I have no idea where the music is coming from. Ads are
> telling me I should buy all kinds of things I didn't even know I
> needed; the temptation to read the news, or email, or the plot of some
> movie I saw an ad for 20 years ago (but never saw) on wikipedia is too
> great and another 45 minutes are gone.

This is the realest description of the modern predicament I have seen! Depending on mood and day, things can be great or awful with modern software for me. In particular, we live in a world of total abundance, which is great but also overwhelming.
There is no shortage of interesting kernels, libraries, and applications, all for free, which is particularly amazing having grown up in the brief era of the PC world where most interesting software was shrink-wrapped and cost-prohibitive.

I can't help but eventually tie the current status quo back to economics. There is a lot of (perceived?) power in flexing a large staff of programmers, whether they are productive or not. It's like having a large standing army during peacetime: the mere existence gives you power in the current market. Look at the first 10 companies on this list: https://companiesmarketcap.com/. It's safe to say Aramco has top-quality HPC and development staff, and the rest are all mostly pure tech plays with large staffs of developers and systems engineers. So in a way, we are sustaining Full Employment Theory for people with *nix skills. That isn't great for quality. Early OS work was done by teams that were small relative to today's, people who really cared about what they were doing. The modern situation is great for people's livelihoods, so unless you are retired, make hay while the sun is shining.
On Mon, Feb 08, 2021 at 10:43:54AM -0800, Dan Stromberg wrote:

> I love Linux, especially Debian lately.

I used Debian everywhere (desktops, laptops, servers), both at work and at home, until they decided to ship it with systemd and didn't give an alternative. I then switched to Devuan, and I couldn't be happier. It's the exact wonderful Debian experience, with the freedom of choice that Debian always gave me until it didn't anymore. I used to say I would go back in a heartbeat if it offered an init alternative to systemd, but I don't think I would anymore.

> But I also have high hopes for Redox OS, and may switch someday:
> https://en.wikipedia.org/wiki/Redox_(operating_system)
> https://www.theregister.com/2019/11/29/after_four_years_rusty_os_nearly_selfhosting/

Me too, as well as for Haiku OS; even though it's not a UN!X derivative, it's POSIX compliant (or so they claim), and it works very nicely as a desktop OS.

Cheers,
Ángel
On Mon, Feb 08, 2021 at 10:32:03AM -0800, Justin Coffey wrote:
> My question then is, are there any examples of projects that maintained
> discipline, focus and relevance over years/decades that serve as counter
> examples to the above statement(s)? OpenBSD? Go? Is there anything to
> learn here?
I think OpenBSD, yes, and also Haiku. Every so often somebody argues
that they need to do this or that, and release a stable (not beta)
version, if they want to compete with other OSes. Their answer is always
that they will release it when it's ready, and that they aren't trying
to compete for "market" share or anything, but to ship a good product
when it's right. I don't know if that will remain true after they ship
R1 (binary compatible with BeOS R5), or if they will feel free to start
adding things in for reasons other than making the best OS they can, and
frack it up. I hope not!
Regards.
Ángel
Henry Bent <henry.r.bent@gmail.com> wrote:

> Apple loves to move quickly and
> abandon compatibility, and in that respect it's an interesting
> counterpoint to Linux or a *BSD where you can have decades old
> binaries that still run.

Nothing in the open-source (OS) world churns and moves around and grows and wiggles as much as Linux. The rate of change of the kernel is simply incomprehensibly staggering. Linux userland isn't much better off. In the commercial (OS) world Microsoft might be churning at a similar rate. Apple on the other hand is relatively stable by comparison -- too stable at times, and not always regularly picking up fixes from third-party projects they make use of. Apple is guilty of extreme churn in their GUI though -- at least from a user's perspective (perhaps their APIs are a bit more stable, but somehow, from observing them from afar, I highly doubt it). Apple's ABIs seem relatively stable -- I still run a few binary apps (albeit quite simple ones) I installed over a decade ago and haven't updated since.

At Mon, 8 Feb 2021 22:05:57 -0900, Michael Huff <mphuff@gmail.com> wrote:
Subject: Re: [TUHS] Macs and future unix derivatives

> I don't think there's any change on NetBSD, no idea about OpenBSD but
> I assume they're the same.

Indeed, NetBSD/i386 is still a "tier 1" port as of the 9.1 release last fall: http://wiki.NetBSD.org/ports/ Note there is a caveat with regard to true 80386: "Any i486 or better CPU should work - genuine Intel or a compatible such as Cyrix, AMD, or NexGen." Also, NetBSD comes with a good group of compatibility and emulation modules for its kernel ABI, including support both for all of its own older releases and for port-specific third-party ABI emulations, such as Linux and SCO Unix.
See for example: https://wiki.NetBSD.org/guide/linux/

I've only once ever needed to run a Linux binary, and quite a long time ago, so I'm not so up to date on these things, but it may well be that NetBSD/i386 can run old 32-bit Linux i386 binaries better than any current release of Linux. Personally I work in a world where there's source code for every application I use, which means I generally only need backward compatibility with the earliest release I might be running at any given time -- i.e. just enough to keep things running after an upgrade while I get it all re-compiled and tested.

> In all honesty, I don't think that backwards compatibility has ever
> been that great on Linux -at least not for the last twenty or so
> years, in my (limited) experience.

Really good backward compatibility for older kernel and library ABIs is a cornerstone of NetBSD release engineering. It is very well designed and implemented, and it is pretty much guaranteed to work or get fixed. Unlike Linux it doesn't rely on shared libraries to work, and the system shared libraries themselves have very good backward compatibility support as well. I.e., NetBSD backward compatibility is far more complete and reliable at all levels (including ABIs and their APIs).

--
Greg A. Woods <gwoods@acm.org>
Kelowna, BC +1 250 762-7675 RoboHack <woods@robohack.ca>
Planix, Inc. <woods@planix.com> Avoncote Farms <woods@avoncote.ca>
On Mon, 8 Feb 2021, Thomas Paulsen wrote:

> I'm a UNIX man since the days of Sys4R4. Since then I run Linux and
> nothing else than Linux, no dual, triple, quadruple boot into anything
> else. To be honest I don't like Apple because it's an elitist system:
> People pay at least one thousand bucks just for the feel like being part
> of a superior elite. Under Sys4R4 me and 50 others shared one big
> resource: no place for any elitist movement.

I've been a Unix bod since Edition 5, and I run a few systems (Mac, FreeBSD, Penguin); I don't consider myself to be an elitist (unless you consider "Anything But Windoze" to be elitist). I also paid only about $300 for my 2nd-hand MacBook Pro, and it works fine.

-- Dave