The Unix Heritage Society mailing list
* Re: [TUHS] Distributed systems, was:  On the origins of Linux - "an academic question"
@ 2020-01-18  6:15 Andrew Warkentin
  2020-01-18  6:25 ` Rob Pike
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Warkentin @ 2020-01-18  6:15 UTC (permalink / raw)
  To: The Eunuchs Historic Society

On 1/17/20, Brantley Coile <brantley@coraid.com> wrote:
> what he said.
>
>> On Jan 17, 2020, at 6:20 PM, Rob Pike <robpike@gmail.com> wrote:
>>
>> Plan 9 is not a "single-system-image cluster".
>>
>> -rob
>>
>
>
I guess SSI isn't the right term for Plan 9 clustering, since not
everything is shared, although I would still say it has some aspects
of SSI.  I was talking about systems that try to make a cluster look
like a single machine in some way even if they don't share everything.
(I'm not sure there's a better term for such systems besides the
rather vague "distributed", which could mean anything from full SSI to
systems that allow transparent access to services/devices on other
machines without trying to make the cluster look like a single system.)

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18  6:15 [TUHS] Distributed systems, was: On the origins of Linux - "an academic question" Andrew Warkentin
@ 2020-01-18  6:25 ` Rob Pike
  2020-01-18  7:30   ` [TUHS] [TUHS -> COFF] " Andrew Warkentin
                     ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Rob Pike @ 2020-01-18  6:25 UTC (permalink / raw)
  To: Andrew Warkentin; +Cc: The Eunuchs Historic Society

I am convinced that large-scale modern compute centers would be run very
differently, with fewer or at least lesser problems, if they were treated
as a single system rather than as a bunch of single-user computers ssh'ed
together.

But history had other ideas.

-rob

* Re: [TUHS] [TUHS -> COFF] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18  6:25 ` Rob Pike
@ 2020-01-18  7:30   ` Andrew Warkentin
  2020-01-18 11:55   ` [TUHS] " Álvaro Jurado
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Andrew Warkentin @ 2020-01-18  7:30 UTC (permalink / raw)
  To: coff; +Cc: The Eunuchs Historic Society

On 1/17/20, Rob Pike <robpike@gmail.com> wrote:
> I am convinced that large-scale modern compute centers would be run very
> differently, with fewer or at least lesser problems, if they were treated
> as a single system rather than as a bunch of single-user computers ssh'ed
> together.
>
> But history had other ideas.
>
[moving to COFF since this isn't specific to historic Unix]

For applications (or groups of related applications) that are already
distributed across multiple machines I'd say "cluster as a single
system" definitely makes sense, but I still stand by what I said
earlier about it not being relevant for things like workstations, or
for server applications that are run on a single machine. I think
clustering should be an optional subsystem, rather than something that
is deeply integrated into the core of an OS. With an OS that has
enough extensibility, it should be possible to have an optional
clustering subsystem without making it feel like an afterthought.

That is what I am planning to do in UX/RT, the OS that I am writing.
The "core supervisor" (seL4 microkernel + process server + low-level
system library) will lack any built-in network support and will just
have support for local file servers using microkernel IPC. The network
stack, 9P client filesystem, 9P server, and clustering subsystem will
all be separate regular processes. The 9P server will use regular
read()/write()-like APIs rather than any special hooks (there will be
read()/write()-like APIs that expose the message registers and shared
buffer to make this more efficient), and similarly the 9P client
filesystem will use the normal API for local filesystem servers (which
will also use read()/write() to send messages). The clustering
subsystem will work by intercepting process API calls and forwarding
them to either the local process server or to a remote instance as
appropriate. Since UX/RT will go even further than Plan 9 with its
file-oriented architecture and make process APIs file-oriented, this
will be transparent to applications. Basically, the way that the
file-oriented process API will work is that every process will have a
special "process server connection" file descriptor that carries all
process server API calls over a minimalist form of RPC, and it will be
possible to redirect this to an intermediary at process startup (of
course, this redirection will be inherited by child processes
automatically).
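The redirection idea above can be sketched in miniature (all names and the wire format here are invented for illustration; UX/RT's real API will differ): if the "process server connection" is just a byte channel carrying small RPCs, an interceptor can be spliced in at startup without the application noticing.

```python
import json
import socket
import threading

def serve(conn, handler):
    # Answer newline-delimited JSON requests on conn until it closes.
    f = conn.makefile("rwb")
    for line in f:
        f.write((json.dumps(handler(json.loads(line))) + "\n").encode())
        f.flush()

def call(chan, req):
    # One RPC round trip over an already-open channel.
    chan.write((json.dumps(req) + "\n").encode())
    chan.flush()
    return json.loads(chan.readline())

def process_server_api(req):
    # Stand-in for the local process server; the pid is a canned answer.
    if req["call"] == "getpid":
        return {"ok": True, "pid": 42}
    return {"ok": False, "error": "unknown call"}

# app <-> interceptor <-> process server, each leg an ordinary byte channel
srv_side, icpt_up = socket.socketpair()
icpt_down, app_side = socket.socketpair()

threading.Thread(target=serve, args=(srv_side, process_server_api),
                 daemon=True).start()

upstream = icpt_up.makefile("rwb")

def interceptor(req):
    # A clustering subsystem would route some calls to a remote process
    # server here; this sketch forwards everything and tags the reply.
    reply = call(upstream, req)
    reply["via"] = "interceptor"
    return reply

threading.Thread(target=serve, args=(icpt_down, interceptor),
                 daemon=True).start()

# The application uses the same read()/write()-style channel either way.
chan = app_side.makefile("rwb")
print(call(chan, {"call": "getpid"}))
```

The application's end of the channel is identical whether it talks to the local process server directly or through the clustering layer, which is the property that makes the redirection transparent.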

* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18  6:25 ` Rob Pike
  2020-01-18  7:30   ` [TUHS] [TUHS -> COFF] " Andrew Warkentin
@ 2020-01-18 11:55   ` Álvaro Jurado
  2020-01-20 17:05     ` U'll Be King of the Stars
  2020-01-18 20:24   ` Adam Thornton
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Álvaro Jurado @ 2020-01-18 11:55 UTC (permalink / raw)
  To: Rob Pike; +Cc: The Eunuchs Historic Society

On Sat, 18 Jan 2020 at 07:26, Rob Pike <robpike@gmail.com> wrote:

> I am convinced that large-scale modern compute centers would be run very
> differently, with fewer or at least lesser problems, if they were treated
> as a single system rather than as a bunch of single-user computers ssh'ed
> together.
>
> But history had other ideas.
>
> -rob
>
>
Isn't IBM Watson that concept?
I think history (business) is tending right now toward a decentralized
model for large systems, rather than toward emulating centralized
computing systems across many machines, and it isn't clear yet whether
that's an advantage.  Anyway, the main interest in researching this is
still keeping the business running.  And for now Unix descendants do
it really well.

-- 
Álvaro Jurado

* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18  6:25 ` Rob Pike
  2020-01-18  7:30   ` [TUHS] [TUHS -> COFF] " Andrew Warkentin
  2020-01-18 11:55   ` [TUHS] " Álvaro Jurado
@ 2020-01-18 20:24   ` Adam Thornton
  2020-01-18 20:40     ` Toby Thain
  2020-01-20 20:19   ` Clem Cole
  2020-02-08 17:18   ` Ralph Corderoy
  4 siblings, 1 reply; 10+ messages in thread
From: Adam Thornton @ 2020-01-18 20:24 UTC (permalink / raw)
  To: The Eunuchs Historic Society



> On Jan 17, 2020, at 11:25 PM, Rob Pike <robpike@gmail.com> wrote:
> 
> I am convinced that large-scale modern compute centers would be run very differently, with fewer or at least lesser problems, if they were treated as a single system rather than as a bunch of single-user computers ssh'ed together.
> 
> But history had other ideas.

So I’ve clearly got a dog in this fight (https://athornton.github.io/Tucson-Python-Dec-2019/, among other talks at athornton.github.io), but I feel like Kubernetes is an interesting compromise in that space.

Admittedly, I come to this not only from a Unix/Linux background but an IBM VM background, but:

1) containerization is a necessary but not sufficient first step;
2) black magic to handle the internal networking and endpoint exposure through fairly simple configuration on the user’s part is essential;
3) abstractions to describe resources (the current enumerated-objects quota stuff is clunky but sufficient; the CPU/memory quota stuff is fine); and
4) an automated lifecycle manager

taken together give you a really nifty platform for defining complex applications via composition, which (IMHO) is one of the fundamental wins of Unix, although in this case, it’s really _not_ very analogous to plugging pipeline stages together.

Note that _running_ Kubernetes is still a pain, so unless running data centers is your job, just rent capacity from someone else’s managed Kubernetes service.

Adam

* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18 20:24   ` Adam Thornton
@ 2020-01-18 20:40     ` Toby Thain
  0 siblings, 0 replies; 10+ messages in thread
From: Toby Thain @ 2020-01-18 20:40 UTC (permalink / raw)
  To: Adam Thornton, The Eunuchs Historic Society

On 2020-01-18 3:24 PM, Adam Thornton wrote:
> 
> 
>> On Jan 17, 2020, at 11:25 PM, Rob Pike <robpike@gmail.com> wrote:
>>
>> I am convinced that large-scale modern compute centers would be run very differently, with fewer or at least lesser problems, if they were treated as a single system rather than as a bunch of single-user computers ssh'ed together.
>>
>> But history had other ideas.
> 
> So I’ve clearly got a dog in this fight (https://athornton.github.io/Tucson-Python-Dec-2019/ et al. (also mostly at athornton.github.io)) but I feel like Kubernetes is an interesting compromise in that space.
> 
> Admittedly, I come to this not only from a Unix/Linux background but an IBM VM background, but:
> 
> 1) containerization is a necessary but not sufficient first step

Exactly.

> 2) black magic to handle the internal networking and endpoint exposure through fairly simple configuration on the user’s part is essential
> 3) abstractions to describe resources (the current enumerated-objects quota stuff is clunky but sufficient; the CPU/Memory quota stuff is fine), and
> 4) an automated lifecycle manager

Right, I think we use the terms "orchestration", "service discovery", etc.

--T

> 
> taken together give you a really nifty platform for defining complex applications via composition, which (IMHO) is one of the fundamental wins of Unix, although in this case, it’s really _not_ very analogous to plugging pipeline stages together.
> 
> Note that _running_ Kubernetes is still a pain, so unless running data centers is your job, just rent capacity from someone else’s managed Kubernetes service.
> 
> Adam
> 


* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18 11:55   ` [TUHS] " Álvaro Jurado
@ 2020-01-20 17:05     ` U'll Be King of the Stars
  0 siblings, 0 replies; 10+ messages in thread
From: U'll Be King of the Stars @ 2020-01-20 17:05 UTC (permalink / raw)
  To: Álvaro Jurado, The Eunuchs Historic Society

On 18/01/2020 11:55, Álvaro Jurado wrote:
> IBM Watson is not that concept?

I don't want to go too off topic but out of all the cloud computing 
providers I think that IBM Cloud is the nicest in terms of its interface 
and functionality.

Does anybody else think the same?

I'm considering investing more of my time and resources into learning 
about it.  It certainly has the most interesting ecosystem.  It doesn't 
make me feel bombarded with information overload in the same way that 
AWS does[1].

Nevertheless I think it's useful to have at least one AWS EC2 machine if 
you're working professionally in the field of cloud computing, and to 
know your way around the UI.  It is, after all, the industry standard 
reference platform in this field.

Andrew

[1] https://www.lastweekinaws.com is the saving grace that makes the 
complicated AWS ecosystem understandable to me.
-- 
OpenPGP key: EB28 0338 28B7 19DA DAB0  B193 D21D 996E 883B E5B9

* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18  6:25 ` Rob Pike
                     ` (2 preceding siblings ...)
  2020-01-18 20:24   ` Adam Thornton
@ 2020-01-20 20:19   ` Clem Cole
  2020-02-08 17:18   ` Ralph Corderoy
  4 siblings, 0 replies; 10+ messages in thread
From: Clem Cole @ 2020-01-20 20:19 UTC (permalink / raw)
  To: Rob Pike; +Cc: The Eunuchs Historic Society

On Sat, Jan 18, 2020 at 1:25 AM Rob Pike <robpike@gmail.com> wrote:

> I am convinced that large-scale modern compute centers would be run very
> differently, with fewer or at least lesser problems, if they were treated
> as a single system rather than as a bunch of single-user computers ssh'ed
> together.
>
+1


> But history had other ideas.
>
History and 'good' (cheap) enough.

* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-01-18  6:25 ` Rob Pike
                     ` (3 preceding siblings ...)
  2020-01-20 20:19   ` Clem Cole
@ 2020-02-08 17:18   ` Ralph Corderoy
  2020-02-09 18:42     ` Larry McVoy
  4 siblings, 1 reply; 10+ messages in thread
From: Ralph Corderoy @ 2020-02-08 17:18 UTC (permalink / raw)
  To: Rob Pike; +Cc: The Eunuchs Historic Society

Hi,

Rob Pike wrote:
> I am convinced that large-scale modern compute centers would be run
> very differently, with fewer or at least lesser problems, if they were
> treated as a single system rather than as a bunch of single-user
> computers ssh'ed together.

This reminds me of Western Digital's recent OmniXtend.  It takes a
programmable Ethernet switch and changes the protocol: Ethernet becomes
OmniXtend.  It's a cache-coherency protocol that allows CPUs, custom
ASICs, FPGAs, GPUs, ML accelerators, ... to each hang off an 802.3 PHY
and share one memory pool.
https://blog.westerndigital.com/omnixtend-fabric-innovation-with-risc-v/

It might be a step towards a room of racks being marshalled as a single
computer.

-- 
Cheers, Ralph.

* Re: [TUHS] Distributed systems, was: On the origins of Linux - "an academic question"
  2020-02-08 17:18   ` Ralph Corderoy
@ 2020-02-09 18:42     ` Larry McVoy
  0 siblings, 0 replies; 10+ messages in thread
From: Larry McVoy @ 2020-02-09 18:42 UTC (permalink / raw)
  To: Ralph Corderoy; +Cc: The Eunuchs Historic Society

On Sat, Feb 08, 2020 at 05:18:37PM +0000, Ralph Corderoy wrote:
> Hi,
> 
> Rob Pike wrote:
> > I am convinced that large-scale modern compute centers would be run
> > very differently, with fewer or at least lesser problems, if they were
> > treated as a single system rather than as a bunch of single-user
> > computers ssh'ed together.
> 
> This reminds me of Western Digital's recent OmniXtend.  It takes a
> programmable Ethernet switch and changes the protocol: Ethernet becomes
> OmniXtend.  It's a cache-coherency protocol that allows CPUs, custom
> ASICs, FPGAs, GPUs, ML accelerators, ... to each hang off an 802.3 PHY
> and share one memory pool.
> https://blog.westerndigital.com/omnixtend-fabric-innovation-with-risc-v/
> 
> It might be a step towards a room of racks being marshalled as a single
> computer.

Or it might be a giant step backwards in terms of memory latency.
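A rough sense of scale (both latencies below are assumed order-of-magnitude figures for illustration, not measurements; real numbers vary widely with hardware, fabric, and topology):

```python
# Back-of-envelope only: assumed ballpark latencies, not measured values.
local_dram_ns = 100      # a cache miss served from local DRAM
fabric_rtt_ns = 5_000    # one round trip across an Ethernet-class fabric

slowdown = fabric_rtt_ns / local_dram_ns
print(f"a remote-memory access costs roughly {slowdown:.0f}x a local one")
```

Even with generous assumptions, pooled memory behind a switch sits one to two orders of magnitude further away than local DRAM, which is the step backwards being alluded to.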

end of thread, other threads:[~2020-02-09 18:43 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-18  6:15 [TUHS] Distributed systems, was: On the origins of Linux - "an academic question" Andrew Warkentin
2020-01-18  6:25 ` Rob Pike
2020-01-18  7:30   ` [TUHS] [TUHS -> COFF] " Andrew Warkentin
2020-01-18 11:55   ` [TUHS] " Álvaro Jurado
2020-01-20 17:05     ` U'll Be King of the Stars
2020-01-18 20:24   ` Adam Thornton
2020-01-18 20:40     ` Toby Thain
2020-01-20 20:19   ` Clem Cole
2020-02-08 17:18   ` Ralph Corderoy
2020-02-09 18:42     ` Larry McVoy
