9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] pcie for inter machine comms
@ 2011-07-06  9:06 Steve Simon
  2011-07-06 10:46 ` Richard Miller
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Steve Simon @ 2011-07-06  9:06 UTC (permalink / raw)
  To: 9fans

Do any of the HPC guys who read this list know of anyone using
PCIe with a non-transparent bridge to send data between hosts
as a very fast, very local network?
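
(For concreteness, roughly the sort of thing I mean -- a minimal
user-space sketch, assuming the NT bridge's downstream window has
already been programmed and shows up at some physical address; the
address below is purely hypothetical, and mmap of /dev/mem on a
Unix-ish host is just one way to poke at it:)

    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    enum { Winsize = 4096 };
    #define NTBWIN 0xd0000000UL   /* hypothetical BAR window exposed by the NT bridge */

    int
    main(void)
    {
        int fd;
        void *win;

        fd = open("/dev/mem", O_RDWR|O_SYNC);
        if(fd < 0)
            return 1;
        /* writes into this window are forwarded across the bridge
           into the other host's memory */
        win = mmap(0, Winsize, PROT_READ|PROT_WRITE, MAP_SHARED, fd, (off_t)NTBWIN);
        if(win == MAP_FAILED)
            return 1;
        memcpy(win, "hello from the other side", 26);
        munmap(win, Winsize);
        close(fd);
        return 0;
    }

In practice you would also want the bridge's doorbell and scratchpad
registers for signalling, which is most of what the vendor NTB drivers
spend their code on.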

I seem to remember IBM did something like this with back-to-back DMAs on
the RS/6000 in the early 1990s, but does anyone do it now? Or do we feel
that UDP over 40GbE is fast enough for anything anyone needs (at present)?

-Steve




* Re: [9fans] pcie for inter machine comms
  2011-07-06  9:06 [9fans] pcie for inter machine comms Steve Simon
@ 2011-07-06 10:46 ` Richard Miller
  2011-07-06 17:11 ` erik quanstrom
  2011-07-06 17:30 ` Jack
  2 siblings, 0 replies; 10+ messages in thread
From: Richard Miller @ 2011-07-06 10:46 UTC (permalink / raw)
  To: 9fans

> I seem to remember IBM did something like this with back-to-back DMAs on
> the RS/6000 in the early 1990s

...  and before that (1970s) you could join 360 and 370 mainframes
into what we would nowadays call a "cluster" by splicing I/O channels
together with a CTC (channel-to-channel adapter).

That doesn't answer your question about PCIe though.





* Re: [9fans] pcie for inter machine comms
  2011-07-06  9:06 [9fans] pcie for inter machine comms Steve Simon
  2011-07-06 10:46 ` Richard Miller
@ 2011-07-06 17:11 ` erik quanstrom
  2011-07-06 17:18   ` ron minnich
  2011-07-06 17:30 ` Jack
  2 siblings, 1 reply; 10+ messages in thread
From: erik quanstrom @ 2011-07-06 17:11 UTC (permalink / raw)
  To: 9fans

> I seem to remember IBM did something like this with back-to-back DMAs on
> the RS/6000 in the early 1990s, but does anyone do it now? Or do we feel
> that UDP over 40GbE is fast enough for anything anyone needs (at present)?

40gbe has more bandwidth than an 8 lane pcie 2.0 slot, but obviously
bandwidth != latency.  :-)
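
back of the envelope (assuming pcie 2.0 at 5 GT/s per lane with 8b/10b
encoding; round numbers only):

    pcie 2.0 x8:  8 lanes x 5 GT/s x 8/10  = 32 Gb/s usable
    40gbe:        40 Gb/s on the wire, before framing overhead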

i'm not sure why you'd use udp if you're using it as a bus, though.

- erik




* Re: [9fans] pcie for inter machine comms
  2011-07-06 17:11 ` erik quanstrom
@ 2011-07-06 17:18   ` ron minnich
  2011-07-06 19:08     ` Lyndon Nerenberg (VE6BBM/VE7TFX)
  2011-07-06 20:57     ` Charles Forsyth
  0 siblings, 2 replies; 10+ messages in thread
From: ron minnich @ 2011-07-06 17:18 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

http://www.epn-online.com/page/new59551/pcie-switch-devices.html and
http://www.plxtech.com/products/expresslane/switches

This seems to come up with every new generation of bus, such as
pci or lately pcie. It has a lot of limits and, eventually, people
just go with a fast network. For one thing, it doesn't grow that big
and, for another, error handling can be interesting.

ron




* Re: [9fans] pcie for inter machine comms
  2011-07-06  9:06 [9fans] pcie for inter machine comms Steve Simon
  2011-07-06 10:46 ` Richard Miller
  2011-07-06 17:11 ` erik quanstrom
@ 2011-07-06 17:30 ` Jack
  2 siblings, 0 replies; 10+ messages in thread
From: Jack @ 2011-07-06 17:30 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On 7/6/2011 4:06 AM, Steve Simon wrote:
> Do any of the HPC guys who read this list know of anyone using
> PCIe with a non-transparent bridge to send data between hosts
> as a very fast, very local network?
>
> I seem to remember IBM did something like this with back-to-back DMAs on
> the RS/6000 in the early 1990s, but does anyone do it now? Or do we feel
> that UDP over 40GbE is fast enough for anything anyone needs (at present)?
>
> -Steve
>
It's funny; it's as if you have the exact opposite vision to mine.
One of my "aha!" moments messing with Plan 9 was the thought that
generic consumer physical-layer networks are the "backplane" of
hardware expansion in Plan 9.  This is how I've viewed Plan 9 and
Inferno ever since.  In fact my first thought was "damn, well now I
don't need all of this pci[e] crap -- just a NIC".

Ever since then I have been developing some 9P devices using
microcontrollers of various capabilities.  I had my Arduino getting
almost all the way through negotiating a Styx session over Ethernet
(no auth).  Then I got married..... :)  In due time I will get there.
I've dropped the Arduino though, moving on to bigger and better things.

I know this has nothing to do with your question, but I just wanted to
share a point of view.
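
Since I mentioned the Styx negotiation: for anyone curious, a rough
sketch of what the opening Tversion message looks like on the wire
(assuming the 9P2000 flavour; msize and the buffer size here are just
example values):

    #include <stdio.h>
    #include <string.h>

    typedef unsigned char uchar;

    enum { Tversion = 100, NOTAG = 0xFFFF };

    /* 9P integers are little-endian on the wire */
    static void put2(uchar *p, unsigned v){ p[0] = v; p[1] = v>>8; }
    static void put4(uchar *p, unsigned v){ p[0] = v; p[1] = v>>8; p[2] = v>>16; p[3] = v>>24; }

    int
    main(void)
    {
        char *ver = "9P2000";
        unsigned msize = 8192;      /* example: max message size we offer */
        unsigned n = strlen(ver);
        uchar buf[64];
        unsigned i, size;

        /* size[4] Tversion tag[2] msize[4] version[s] */
        size = 4 + 1 + 2 + 4 + 2 + n;
        put4(buf, size);
        buf[4] = Tversion;
        put2(buf+5, NOTAG);
        put4(buf+7, msize);
        put2(buf+11, n);
        memcpy(buf+13, ver, n);

        for(i = 0; i < size; i++)
            printf("%02x ", buf[i]);
        printf("\n");
        return 0;
    }

Getting that (and the matching Rversion parse) working on a
microcontroller is the first hurdle; attach, walk, open and read
follow the same size/type/tag pattern.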

-Jack




* Re: [9fans] pcie for inter machine comms
  2011-07-06 17:18   ` ron minnich
@ 2011-07-06 19:08     ` Lyndon Nerenberg (VE6BBM/VE7TFX)
  2011-07-06 20:20       ` ron minnich
       [not found]       ` <CAP6exYLfHOROhZsA-D2+wrMPVwyp7rwwAtvHEf9hJadM+nF7hA@mail.gmail.c>
  2011-07-06 20:57     ` Charles Forsyth
  1 sibling, 2 replies; 10+ messages in thread
From: Lyndon Nerenberg (VE6BBM/VE7TFX) @ 2011-07-06 19:08 UTC (permalink / raw)
  To: 9fans

> This seems to come up with every new generation of bus, such as
> pci or lately pcie. It has a lot of limits and, eventually, people
> just go with a fast network. For one thing, it doesn't grow that big
> and, for another, error handling can be interesting.

Has anyone done up a driver for an infiniband interface?





* Re: [9fans] pcie for inter machine comms
  2011-07-06 19:08     ` Lyndon Nerenberg (VE6BBM/VE7TFX)
@ 2011-07-06 20:20       ` ron minnich
       [not found]       ` <CAP6exYLfHOROhZsA-D2+wrMPVwyp7rwwAtvHEf9hJadM+nF7hA@mail.gmail.c>
  1 sibling, 0 replies; 10+ messages in thread
From: ron minnich @ 2011-07-06 20:20 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Wed, Jul 6, 2011 at 12:08 PM, Lyndon Nerenberg (VE6BBM/VE7TFX)
<lyndon@orthanc.ca> wrote:
>> This seems to come up with every new generation of bus, such as
>> pci or lately pcie. It has a lot of limits and, eventually, people
>> just go with a fast network. For one thing, it doesn't grow that big
>> and, for another, error handling can be interesting.
>
> Has anyone done up a driver for an infiniband interface?

let's not go there.

we looked at it in 2005 and the IB stack is so awful that it's not
something anyone wants to touch. It works on Linux, works on few other
systems, and it's pretty much a headache.

There's a reason that some Big Companies got out of IB.

ron




* Re: [9fans] pcie for inter machine comms
  2011-07-06 17:18   ` ron minnich
  2011-07-06 19:08     ` Lyndon Nerenberg (VE6BBM/VE7TFX)
@ 2011-07-06 20:57     ` Charles Forsyth
  1 sibling, 0 replies; 10+ messages in thread
From: Charles Forsyth @ 2011-07-06 20:57 UTC (permalink / raw)
  To: 9fans


i tried a pair of (i think) ultrastor 34f (VESA Local Bus!) connected back to back
with Fast/Wide/Indifferent SCSI (one was host, the other was target)
between cpu server and file server and it was fine while it lasted.
when something went wrong, the target's bus would hang.
in fact, it also wasn't fine, because there was odd latency that made
it much less effective than the raw figures might have suggested.



* Re: [9fans] pcie for inter machine comms
       [not found]       ` <CAP6exYLfHOROhZsA-D2+wrMPVwyp7rwwAtvHEf9hJadM+nF7hA@mail.gmail.c>
@ 2011-07-07 16:41         ` erik quanstrom
  2011-07-08  9:34           ` Richard Miller
  0 siblings, 1 reply; 10+ messages in thread
From: erik quanstrom @ 2011-07-07 16:41 UTC (permalink / raw)
  To: 9fans

> > Has anyone done up a driver for an infiniband interface?
>
> let's not go there.
>
> we looked at it in 2005 and the IB stack is so awful that it's not
> something anyone wants to touch. It works on Linux, works on few other
> systems, and it's pretty much a headache.
>
> There's a reason that some Big Companies got out of IB.

there are fewer infiniband-only chipsets out there than there
used to be.  most ib hardware is actually a "converged" network
adapter, which means you can also do ethernet.  it's interesting
to do a line count on some of the linux drivers.  they tend toward
many tens of kloc.  you just haven't lived till you've taken a peek
at a 50kloc driver.  it may be that a reasonable ethernet or tcp/ib
driver could be written.

- erik




* Re: [9fans] pcie for inter machine comms
  2011-07-07 16:41         ` erik quanstrom
@ 2011-07-08  9:34           ` Richard Miller
  0 siblings, 0 replies; 10+ messages in thread
From: Richard Miller @ 2011-07-08  9:34 UTC (permalink / raw)
  To: 9fans

> it's interesting
> to do a line count on some of the linux drivers.  they tend toward
> many tens of kloc.

This is just history repeating itself.  When I first encountered Unix
(6th edition) after working in the IBM mainframe world, my reaction
was "Amazing!  the whole Unix kernel has fewer lines of code than the
OS/360 serial line driver."




