9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] Inducing artificial latency
@ 2010-06-15 21:29 John Floren
  2010-06-15 21:39 ` Jorden M
                   ` (4 more replies)
  0 siblings, 5 replies; 23+ messages in thread
From: John Floren @ 2010-06-15 21:29 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I'm going to be doing some work with 9P and high-latency links this
summer and fall. I need to be able to test things over a high-latency
network, but since I may be modifying the kernel, running stuff on
e.g. mordor is not the best option. I have enough systems here to do
the tests, I just need to artificially add latency between them.

I've come up with a basic idea, but before I go diving in I want to
run it by 9fans and get opinions. What I'm thinking is writing a
synthetic file system that will collect writes to /net; to simulate a
high-latency file copy, you would run this synthetic fs, then do "9fs
remote; cp /n/remote/somefile .". If it's a control message, that gets
sent to the file immediately, but if it's a data write, that data
actually gets held in a queue until some amount of time (say 50ms) has
passed, to simulate network lag. After that time is up, the fs writes
to the underlying file, the data goes out, etc.
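
Roughly, I imagine the core of it would look something like this (only a
sketch, assuming a lib9p server where each served file's aux holds a file
descriptor onto the real /net file; fswrite and Delayms are just names I
made up):

#include <u.h>
#include <libc.h>
#include <fcall.h>
#include <thread.h>
#include <9p.h>

enum { Delayms = 50 };	/* pretend one-way latency */

/* hold data writes for Delayms, then pass them to the real file;
   ctl files would get a handler without the sleep.  note that
   sleeping here serializes the whole srv loop; a real version
   would queue the data and respond later from a timer. */
static void
fswrite(Req *r)
{
	int fd;
	long n;

	fd = (int)(uintptr)r->fid->file->aux;	/* fd onto the underlying /net file */
	sleep(Delayms);
	n = pwrite(fd, r->ifcall.data, r->ifcall.count, r->ifcall.offset);
	if(n < 0){
		responderror(r);
		return;
	}
	r->ofcall.count = n;
	respond(r, nil);
}

Srv fs = {
	.write=	fswrite,
	/* .open, .read, etc. would be straight passthroughs */
};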

I have not really done any networking stuff or filesystems work on
Plan 9; thus far, I've confined myself to kernel work. I could be
completely mistaken about how 9P and networking behave, so I'm asking
for opinions and suggestions. For my part, I'm trying to find good
example filesystems to read (I've been looking at gpsfs, for
instance).


John
--
"With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:29 [9fans] Inducing artificial latency John Floren
@ 2010-06-15 21:39 ` Jorden M
  2010-06-15 22:06   ` Enrique Soriano
  2010-06-15 21:39 ` erik quanstrom
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 23+ messages in thread
From: Jorden M @ 2010-06-15 21:39 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 15, 2010 at 5:29 PM, John Floren <slawmaster@gmail.com> wrote:
> I'm going to be doing some work with 9P and high-latency links this
> summer and fall. I need to be able to test things over a high-latency
> network, but since I may be modifying the kernel, running stuff on
> e.g. mordor is not the best option. I have enough systems here to do
> the tests, I just need to artificially add latency between them.
>

Use a router that can do QoS; it's much simpler.




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:29 [9fans] Inducing artificial latency John Floren
  2010-06-15 21:39 ` Jorden M
@ 2010-06-15 21:39 ` erik quanstrom
  2010-06-15 21:53   ` David Leimbach
  2010-06-15 21:45 ` Devon H. O'Dell
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 23+ messages in thread
From: erik quanstrom @ 2010-06-15 21:39 UTC (permalink / raw)
  To: 9fans

> I've come up with a basic idea, but before I go diving in I want to
> run it by 9fans and get opinions. What I'm thinking is writing a
> synthetic file system that will collect writes to /net; to simulate a
> high-latency file copy, you would run this synthetic fs, then do "9fs
> remote; cp /n/remote/somefile .". If it's a control message, that gets
> sent to the file immediately, but if it's a data write, that data
> actually gets held in a queue until some amount of time (say 50ms) has
> passed, to simulate network lag. After that time is up, the fs writes
> to the underlying file, the data goes out, etc.

instead, why don't you create a simulated ethernet device
that adds a delay before sending the packet along to a
real ethernet device?

the only trick will be getting the simulated ethernet to
grab the real ethernet during setup.  i imagine that you'll
need something like
	ether0=type=fake
	fake=real=#l1/ether1 i=10 iσ=20 o=5 oσ=0
in plan9.ini
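
the delaying transmit itself would be small.  very roughly (Fake and its
fields are invented for the sketch, and this glosses over the fact that
transmit can be called from interrupt context):

typedef struct Fake Fake;
struct Fake {
	Ether	*real;		/* the wrapped real interface */
	int	odelay;		/* output delay, ms, from plan9.ini */
	Rendez	tr;
};

static void
faketransmit(Ether *e)
{
	Fake *f;
	Block *b;

	f = e->ctlr;
	while((b = qget(e->oq)) != nil){
		tsleep(&f->tr, return0, nil, f->odelay);	/* hold the frame */
		qbwrite(f->real->oq, b);			/* hand it to the real device */
		f->real->transmit(f->real);
	}
}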

- erik




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:29 [9fans] Inducing artificial latency John Floren
  2010-06-15 21:39 ` Jorden M
  2010-06-15 21:39 ` erik quanstrom
@ 2010-06-15 21:45 ` Devon H. O'Dell
  2010-06-15 22:43   ` John Floren
  2010-06-16 19:04   ` John Floren
  2010-06-15 21:58 ` Gorka Guardiola
  2010-06-16  5:20 ` Skip Tavakkolian
  4 siblings, 2 replies; 23+ messages in thread
From: Devon H. O'Dell @ 2010-06-15 21:45 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/6/15 John Floren <slawmaster@gmail.com>:
> I'm going to be doing some work with 9P and high-latency links this
> summer and fall. I need to be able to test things over a high-latency
> network, but since I may be modifying the kernel, running stuff on
> e.g. mordor is not the best option. I have enough systems here to do
> the tests, I just need to artificially add latency between them.
>
> I've come up with a basic idea, but before I go diving in I want to
> run it by 9fans and get opinions. What I'm thinking is writing a
> synthetic file system that will collect writes to /net; to simulate a
> high-latency file copy, you would run this synthetic fs, then do "9fs
> remote; cp /n/remote/somefile .". If it's a control message, that gets
> sent to the file immediately, but if it's a data write, that data
> actually gets held in a queue until some amount of time (say 50ms) has
> passed, to simulate network lag. After that time is up, the fs writes
> to the underlying file, the data goes out, etc.
>
> I have not really done any networking stuff or filesystems work on
> Plan 9; thus far, I've confined myself to kernel work. I could be
> completely mistaken about how 9P and networking behave, so I'm asking
> for opinions and suggestions. For my part, I'm trying to find good
> example filesystems to read (I've been looking at gpsfs, for
> instance).

This seems reasonable, but remember that you're only going to be able
to simulate latency in one direction as far as the machine is
concerned. So you'll want to have the same configuration on both
machines; otherwise things will probably look weird. I'd recommend
doing this at a lower level, perhaps introducing a short sleep in IP
output so that the latency is consistent across all requests. And of
course both machines would need to have this configuration. You could
have the stack export another file that you can tune if you want to
maintain a file-system approach. But I think a synthetic FS for this
is overkill.

If you're on a local network, this should do the trick, otherwise you
may need to do some adaptive latency.

--dho

>
> John
> --
> "With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
> nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich
>
>




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:39 ` erik quanstrom
@ 2010-06-15 21:53   ` David Leimbach
  2010-06-15 21:59     ` erik quanstrom
  0 siblings, 1 reply; 23+ messages in thread
From: David Leimbach @ 2010-06-15 21:53 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 15, 2010 at 2:39 PM, erik quanstrom <quanstro@quanstro.net>wrote:

> > I've come up with a basic idea, but before I go diving in I want to
> > run it by 9fans and get opinions. What I'm thinking is writing a
> > synthetic file system that will collect writes to /net; to simulate a
> > high-latency file copy, you would run this synthetic fs, then do "9fs
> > remote; cp /n/remote/somefile .". If it's a control message, that gets
> > sent to the file immediately, but if it's a data write, that data
> > actually gets held in a queue until some amount of time (say 50ms) has
> > passed, to simulate network lag. After that time is up, the fs writes
> > to the underlying file, the data goes out, etc.
>
> instead, why don't you create a simulated ethernet device
> that adds in a delay before sending the packet along to a
> real ethernet device.
>
> the only trick will be getting the simulated ethernet to
> grab the real ethernet during setup.  i imagine that you'll
> need something like
>        ether0=type=fake
>        fake=real=#l1/ether1 i=10 iσ=20 o=5 oσ=0
> in plan9.ini
>
> - erik
>

Could one write a filesystem server that is a passthrough to the ethernet
device instead, adding latency? Or is this really going to need to be a new
device?


* Re: [9fans] Inducing artificial latency
  2010-06-15 21:29 [9fans] Inducing artificial latency John Floren
                   ` (2 preceding siblings ...)
  2010-06-15 21:45 ` Devon H. O'Dell
@ 2010-06-15 21:58 ` Gorka Guardiola
  2010-06-15 22:07   ` erik quanstrom
  2010-06-16  5:20 ` Skip Tavakkolian
  4 siblings, 1 reply; 23+ messages in thread
From: Gorka Guardiola @ 2010-06-15 21:58 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 15, 2010 at 11:29 PM, John Floren <slawmaster@gmail.com> wrote:
> I'm going to be doing some work with 9P and high-latency links this
> summer and fall. I need to be able to test things over a high-latency
> network, but since I may be modifying the kernel, running stuff on
> e.g. mordor is not the best option. I have enough systems here to do
> the tests, I just need to artificially add latency between them.

Last time I needed something similar, I just ran a modified iostats.
--
- curiosity sKilled the cat




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:53   ` David Leimbach
@ 2010-06-15 21:59     ` erik quanstrom
  0 siblings, 0 replies; 23+ messages in thread
From: erik quanstrom @ 2010-06-15 21:59 UTC (permalink / raw)
  To: 9fans

> > the only trick will be getting the simulated ethernet to
> > grab the real ethernet during setup.  i imagine that you'll
> > need something like
> >        ether0=type=fake
> >        fake=real=#l1/ether1 i=10 iσ=20 o=5 oσ=0
> > in plan9.ini
> >
> > - erik
> >
> Could one write a filesystem server that is a passthrough to the ethernet
> device instead, adding latency? Or is this really going to need to be a new
> device?

all the ethernet drivers are part of the same device,
devether.

you would need to write a new instance of the device,
but there are just a few functions you need to implement.
most will be passthrough to the real ethernet device.
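
roughly, the skeleton follows the shape of the existing pc ether drivers
(names invented; actually locating the real Ether* named in plan9.ini is
the part i'm waving my hands over):

/* skeleton only; fakeattach/faketransmit/fakeifstat would be thin
   wrappers that forward to the real Ether* found at reset time */
static int
fakepnp(Ether *e)
{
	Fake *f;

	f = malloc(sizeof *f);
	/* here: find the real interface (ether1 in the plan9.ini example)
	   and pull the delay parameters out of the config options */
	e->ctlr = f;
	e->attach = fakeattach;
	e->transmit = faketransmit;
	e->ifstat = fakeifstat;
	memmove(e->ea, f->real->ea, Eaddrlen);
	return 0;
}

void
etherfakelink(void)
{
	addethercard("fake", fakepnp);
}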

- erik




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:39 ` Jorden M
@ 2010-06-15 22:06   ` Enrique Soriano
  0 siblings, 0 replies; 23+ messages in thread
From: Enrique Soriano @ 2010-06-15 22:06 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 15, 2010 at 11:39 PM, Jorden M <jrm8005@gmail.com> wrote:
> On Tue, Jun 15, 2010 at 5:29 PM, John Floren <slawmaster@gmail.com> wrote:
>> I'm going to be doing some work with 9P and high-latency links this
>> summer and fall. I need to be able to test things over a high-latency
>> network, but since I may be modifying the kernel, running stuff on
>> e.g. mordor is not the best option. I have enough systems here to do
>> the tests, I just need to artificially add latency between them.
>>
>
> Use a router that can do QoS, much simpler.

You can also put a BSD in the middle
and use Dummynet to generate the latency:

http://info.iet.unipi.it/~luigi/dummynet/


Regards
q




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:58 ` Gorka Guardiola
@ 2010-06-15 22:07   ` erik quanstrom
  2010-06-15 22:14     ` Gorka Guardiola
  0 siblings, 1 reply; 23+ messages in thread
From: erik quanstrom @ 2010-06-15 22:07 UTC (permalink / raw)
  To: 9fans

> Last time I needed something similar, I just run a modified iostats.

how does iostats add latency?

- erik




* Re: [9fans] Inducing artificial latency
  2010-06-15 22:07   ` erik quanstrom
@ 2010-06-15 22:14     ` Gorka Guardiola
  2010-06-22 14:53       ` John Floren
  0 siblings, 1 reply; 23+ messages in thread
From: Gorka Guardiola @ 2010-06-15 22:14 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Wed, Jun 16, 2010 at 12:07 AM, erik quanstrom <quanstro@quanstro.net> wrote:
>> Last time I needed something similar, I just run a modified iostats.
>
> how does iostats add latency?
>

hence the modified.

--
- curiosity sKilled the cat




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:45 ` Devon H. O'Dell
@ 2010-06-15 22:43   ` John Floren
  2010-06-15 23:01     ` Devon H. O'Dell
  2010-06-16 19:04   ` John Floren
  1 sibling, 1 reply; 23+ messages in thread
From: John Floren @ 2010-06-15 22:43 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 15, 2010 at 5:45 PM, Devon H. O'Dell <devon.odell@gmail.com> wrote:
> 2010/6/15 John Floren <slawmaster@gmail.com>:
>> I'm going to be doing some work with 9P and high-latency links this
>> summer and fall. I need to be able to test things over a high-latency
>> network, but since I may be modifying the kernel, running stuff on
>> e.g. mordor is not the best option. I have enough systems here to do
>> the tests, I just need to artificially add latency between them.
>>
>> I've come up with a basic idea, but before I go diving in I want to
>> run it by 9fans and get opinions. What I'm thinking is writing a
>> synthetic file system that will collect writes to /net; to simulate a
>> high-latency file copy, you would run this synthetic fs, then do "9fs
>> remote; cp /n/remote/somefile .". If it's a control message, that gets
>> sent to the file immediately, but if it's a data write, that data
>> actually gets held in a queue until some amount of time (say 50ms) has
>> passed, to simulate network lag. After that time is up, the fs writes
>> to the underlying file, the data goes out, etc.
>>
>> I have not really done any networking stuff or filesystems work on
>> Plan 9; thus far, I've confined myself to kernel work. I could be
>> completely mistaken about how 9P and networking behave, so I'm asking
>> for opinions and suggestions. For my part, I'm trying to find good
>> example filesystems to read (I've been looking at gpsfs, for
>> instance).
>
> This seems reasonable, but remember that you're only going to be able
> to simulate latency in one direction as far as the machine is
> concerned. So you'll want to have the same configuration on both
> machines, otherwise things will probably look weird. I'd recommend
> doing this at a lower level, perhaps introducing a short sleep in IP
> output so that the latency is consistent across all requests. And of
> course both machines would need to have this configuration. You could
> have the stack export another file that you can tune if you want to
> maintain a file-system approach. But I think a synthetic FS for this
> is overkill.
>

My plan was to have each side running the fs. I had thought about
sticking a sleep in the kernel's ip code right before it writes to the
device, but since I planned on netbooting the test systems (so much
easier to bring in a new kernel), I thought it might be advantageous
to slow things down for only one process, so the boot traffic itself
isn't delayed. I suppose I could make it sleep conditionally, like
making the process write to a ctl file if it wants latency, then
sleeping only for that pid and its children.
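
Something like this, maybe (all names made up; following children would
mean also checking parentpid, or inheriting a flag at fork):

static int delaypid;		/* set by a write of "latency on" to a ctl file */

enum { Delayms = 25 };		/* half of a 50ms round trip, say */

static void
maybedelay(void)
{
	if(delaypid != 0 && up->pid == delaypid)
		tsleep(&up->sleep, return0, nil, Delayms);
}

with maybedelay() called from the same spot in the ip output path where
the unconditional sleep would otherwise go.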


John
--
"With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich




* Re: [9fans] Inducing artificial latency
  2010-06-15 22:43   ` John Floren
@ 2010-06-15 23:01     ` Devon H. O'Dell
  0 siblings, 0 replies; 23+ messages in thread
From: Devon H. O'Dell @ 2010-06-15 23:01 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/6/15 John Floren <slawmaster@gmail.com>:
> On Tue, Jun 15, 2010 at 5:45 PM, Devon H. O'Dell <devon.odell@gmail.com> wrote:
>> 2010/6/15 John Floren <slawmaster@gmail.com>:
>>> I'm going to be doing some work with 9P and high-latency links this
>>> summer and fall. I need to be able to test things over a high-latency
>>> network, but since I may be modifying the kernel, running stuff on
>>> e.g. mordor is not the best option. I have enough systems here to do
>>> the tests, I just need to artificially add latency between them.
>>>
>>> I've come up with a basic idea, but before I go diving in I want to
>>> run it by 9fans and get opinions. What I'm thinking is writing a
>>> synthetic file system that will collect writes to /net; to simulate a
>>> high-latency file copy, you would run this synthetic fs, then do "9fs
>>> remote; cp /n/remote/somefile .". If it's a control message, that gets
>>> sent to the file immediately, but if it's a data write, that data
>>> actually gets held in a queue until some amount of time (say 50ms) has
>>> passed, to simulate network lag. After that time is up, the fs writes
>>> to the underlying file, the data goes out, etc.
>>>
>>> I have not really done any networking stuff or filesystems work on
>>> Plan 9; thus far, I've confined myself to kernel work. I could be
>>> completely mistaken about how 9P and networking behave, so I'm asking
>>> for opinions and suggestions. For my part, I'm trying to find good
>>> example filesystems to read (I've been looking at gpsfs, for
>>> instance).
>>
>> This seems reasonable, but remember that you're only going to be able
>> to simulate latency in one direction as far as the machine is
>> concerned. So you'll want to have the same configuration on both
>> machines, otherwise things will probably look weird. I'd recommend
>> doing this at a lower level, perhaps introducing a short sleep in IP
>> output so that the latency is consistent across all requests. And of
>> course both machines would need to have this configuration. You could
>> have the stack export another file that you can tune if you want to
>> maintain a file-system approach. But I think a synthetic FS for this
>> is overkill.
>>
>
> My plan was to have each side running the fs. I had thought about
> sticking a sleep in the kernel's ip code right before it writes to the
> device, but since I planned on netbooting the test systems (so much
> easier to bring in a new kernel), I thought it might be advantageous
> to only slow down for one process. I suppose I could make it sleep
> conditionally, like make the process write to a ctl file if it wants
> latency, then sleep only for that pid and its children.

It is actually possible to get to the process sending the packet in
the IP stack.

--dho

>
> John
> --
> "With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
> nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich
>
>




* Re: [9fans] Inducing artificial latency
  2010-06-15 21:29 [9fans] Inducing artificial latency John Floren
                   ` (3 preceding siblings ...)
  2010-06-15 21:58 ` Gorka Guardiola
@ 2010-06-16  5:20 ` Skip Tavakkolian
  4 siblings, 0 replies; 23+ messages in thread
From: Skip Tavakkolian @ 2010-06-16  5:20 UTC (permalink / raw)
  To: 9fans

any reason loopback(3) won't work? it has options for adding delay.

> I'm going to be doing some work with 9P and high-latency links this
> summer and fall. I need to be able to test things over a high-latency
> network, but since I may be modifying the kernel, running stuff on
> e.g. mordor is not the best option. I have enough systems here to do
> the tests, I just need to artificially add latency between them.
>
> I've come up with a basic idea, but before I go diving in I want to
> run it by 9fans and get opinions. What I'm thinking is writing a
> synthetic file system that will collect writes to /net; to simulate a
> high-latency file copy, you would run this synthetic fs, then do "9fs
> remote; cp /n/remote/somefile .". If it's a control message, that gets
> sent to the file immediately, but if it's a data write, that data
> actually gets held in a queue until some amount of time (say 50ms) has
> passed, to simulate network lag. After that time is up, the fs writes
> to the underlying file, the data goes out, etc.
>
> I have not really done any networking stuff or filesystems work on
> Plan 9; thus far, I've confined myself to kernel work. I could be
> completely mistaken about how 9P and networking behave, so I'm asking
> for opinions and suggestions. For my part, I'm trying to find good
> example filesystems to read (I've been looking at gpsfs, for
> instance).
>
>
> John
> --
> "With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
> nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich





* Re: [9fans] Inducing artificial latency
  2010-06-15 21:45 ` Devon H. O'Dell
  2010-06-15 22:43   ` John Floren
@ 2010-06-16 19:04   ` John Floren
  2010-06-16 22:15     ` Francisco J Ballesteros
  1 sibling, 1 reply; 23+ messages in thread
From: John Floren @ 2010-06-16 19:04 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 15, 2010 at 9:45 PM, Devon H. O'Dell <devon.odell@gmail.com> wrote:
> 2010/6/15 John Floren <slawmaster@gmail.com>:
>> I'm going to be doing some work with 9P and high-latency links this
>> summer and fall. I need to be able to test things over a high-latency
>> network, but since I may be modifying the kernel, running stuff on
>> e.g. mordor is not the best option. I have enough systems here to do
>> the tests, I just need to artificially add latency between them.
>>
>> I've come up with a basic idea, but before I go diving in I want to
>> run it by 9fans and get opinions. What I'm thinking is writing a
>> synthetic file system that will collect writes to /net; to simulate a
>> high-latency file copy, you would run this synthetic fs, then do "9fs
>> remote; cp /n/remote/somefile .". If it's a control message, that gets
>> sent to the file immediately, but if it's a data write, that data
>> actually gets held in a queue until some amount of time (say 50ms) has
>> passed, to simulate network lag. After that time is up, the fs writes
>> to the underlying file, the data goes out, etc.
>>
>> I have not really done any networking stuff or filesystems work on
>> Plan 9; thus far, I've confined myself to kernel work. I could be
>> completely mistaken about how 9P and networking behave, so I'm asking
>> for opinions and suggestions. For my part, I'm trying to find good
>> example filesystems to read (I've been looking at gpsfs, for
>> instance).
>
> This seems reasonable, but remember that you're only going to be able
> to simulate latency in one direction as far as the machine is
> concerned. So you'll want to have the same configuration on both
> machines, otherwise things will probably look weird. I'd recommend
> doing this at a lower level, perhaps introducing a short sleep in IP
> output so that the latency is consistent across all requests. And of
> course both machines would need to have this configuration. You could
> have the stack export another file that you can tune if you want to
> maintain a file-system approach. But I think a synthetic FS for this
> is overkill.
>
> If you're on a local network, this should do the trick, otherwise you
> may need to do some adaptive latency.
>
> --dho

I ended up just sticking a sleep in /sys/src/9/ip/devip.c right before
data gets written to the device. This seems to add the desired latency
with no trouble. I've decided that when running tests I'll probably
just boot the testing systems off their own local fossils; this should
help prevent the latency from adding additional time as binaries are
fetched from the (now "distant") file server.
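
For the record, the change is essentially one line (Delayms is a name I've
made up here; exactly where it lands in ipwrite() will depend on the
kernel you start from):

/* inserted just before ipwrite() queues a Qdata write onto the
   conversation's write queue (the existing qwrite call follows) */
tsleep(&up->sleep, return0, nil, Delayms);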

Thanks!


John
--
"With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich




* Re: [9fans] Inducing artificial latency
  2010-06-16 19:04   ` John Floren
@ 2010-06-16 22:15     ` Francisco J Ballesteros
  2010-06-16 22:31       ` erik quanstrom
  0 siblings, 1 reply; 23+ messages in thread
From: Francisco J Ballesteros @ 2010-06-16 22:15 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

In programs where I control both the client and the server, I simulated
it by spawning a process that slept and then did the write.

That is, you could still send many things at once (i.e., same bandwidth)
but pretend each one was delayed.
One tricky point was making sure that sends were still in order, but that
was a non-issue in my case.
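
In rough terms it looked like this (a sketch rather than the real code;
the single writer proc draining the channel is what keeps sends in order):

#include <u.h>
#include <libc.h>
#include <thread.h>

enum { Delayms = 50 };

typedef struct Dmsg Dmsg;
struct Dmsg {
	char	*buf;
	long	n;
};

static int outfd;	/* the connection being written to */

/* a single proc drains the channel in FIFO order, so writes stay
   ordered even though the sender never waits for them */
static void
delayedwriter(void *a)
{
	Channel *c;
	Dmsg m;

	c = a;
	for(;;){
		recv(c, &m);
		sleep(Delayms);		/* pretend the link is slow */
		write(outfd, m.buf, m.n);
		free(m.buf);
	}
}

/* used in place of write(outfd, buf, n); returns immediately */
static void
delayedwrite(Channel *c, char *buf, long n)
{
	Dmsg m;

	m.buf = malloc(n);
	memmove(m.buf, buf, n);
	m.n = n;
	send(c, &m);
}

void
threadmain(int argc, char **argv)
{
	Channel *c;

	USED(argc); USED(argv);
	outfd = 1;				/* stdout stands in for the network fd */
	c = chancreate(sizeof(Dmsg), 64);
	proccreate(delayedwriter, c, 8192);	/* proc, not thread: sleep blocks the whole proc */

	delayedwrite(c, "hello ", 6);
	delayedwrite(c, "world\n", 6);
	sleep(3*Delayms);			/* both appear after the delay, in order */
	threadexitsall(nil);
}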

Perhaps being able to trigger delays in ip for testing/measuring
with a ctl file would be a lot better, along the lines of what you've done.

On Wed, Jun 16, 2010 at 9:04 PM, John Floren <slawmaster@gmail.com> wrote:
> On Tue, Jun 15, 2010 at 9:45 PM, Devon H. O'Dell <devon.odell@gmail.com> wrote:
>> 2010/6/15 John Floren <slawmaster@gmail.com>:
>>> I'm going to be doing some work with 9P and high-latency links this
>>> summer and fall. I need to be able to test things over a high-latency
>>> network, but since I may be modifying the kernel, running stuff on
>>> e.g. mordor is not the best option. I have enough systems here to do
>>> the tests, I just need to artificially add latency between them.
>>>
>>> I've come up with a basic idea, but before I go diving in I want to
>>> run it by 9fans and get opinions. What I'm thinking is writing a
>>> synthetic file system that will collect writes to /net; to simulate a
>>> high-latency file copy, you would run this synthetic fs, then do "9fs
>>> remote; cp /n/remote/somefile .". If it's a control message, that gets
>>> sent to the file immediately, but if it's a data write, that data
>>> actually gets held in a queue until some amount of time (say 50ms) has
>>> passed, to simulate network lag. After that time is up, the fs writes
>>> to the underlying file, the data goes out, etc.
>>>
>>> I have not really done any networking stuff or filesystems work on
>>> Plan 9; thus far, I've confined myself to kernel work. I could be
>>> completely mistaken about how 9P and networking behave, so I'm asking
>>> for opinions and suggestions. For my part, I'm trying to find good
>>> example filesystems to read (I've been looking at gpsfs, for
>>> instance).
>>
>> This seems reasonable, but remember that you're only going to be able
>> to simulate latency in one direction as far as the machine is
>> concerned. So you'll want to have the same configuration on both
>> machines, otherwise things will probably look weird. I'd recommend
>> doing this at a lower level, perhaps introducing a short sleep in IP
>> output so that the latency is consistent across all requests. And of
>> course both machines would need to have this configuration. You could
>> have the stack export another file that you can tune if you want to
>> maintain a file-system approach. But I think a synthetic FS for this
>> is overkill.
>>
>> If you're on a local network, this should do the trick, otherwise you
>> may need to do some adaptive latency.
>>
>> --dho
>
> I ended up just sticking a sleep in /sys/src/9/ip/devip.c right before
> data gets written to the device. This seems to add the desired latency
> with no trouble. I've decided that when running tests I'll probably
> just boot the testing systems off their own local fossils; this should
> help prevent the latency from adding additional time as binaries are
> fetched from the (now "distant") file server.
>
> Thanks!
>
>
> John
> --
> "With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
> nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich
>
>




* Re: [9fans] Inducing artificial latency
  2010-06-16 22:15     ` Francisco J Ballesteros
@ 2010-06-16 22:31       ` erik quanstrom
  2010-06-16 22:58         ` EBo
  2010-06-16 23:43         ` Bakul Shah
  0 siblings, 2 replies; 23+ messages in thread
From: erik quanstrom @ 2010-06-16 22:31 UTC (permalink / raw)
  To: nemo, 9fans

On Wed Jun 16 18:17:45 EDT 2010, nemo@lsub.org wrote:
> On programs where I control the client and the server, I simulated
> it by spawning a process that did sleep and then do the write.
>
> That is, you could send many things at once (i.e., same bandwidth)
> but you could pretend the thing was delayed.
> One tricky point was to be sure that sends were still in order, but that
> was a non-issue in my case.
>
> Perhaps being able to trigger delays on ip for testing/measuring
> with a ctl would be a lot better, in the line of what you've done.

the ethernet, or shim ethernet, device seems like a better place for this.
ip is not the only protocol!  that's what loopback(3) does, but without
the real network.  it would be good to plug loopback or similar into a real
ethernet.  it's also worth looking at loopback's implementation strategy,
which allows for µs delays.  sleep is just too coarse-grained for my
testing.

dialing up random reordering would seem to me to be a feature at
this level.

- erik




* Re: [9fans] Inducing artificial latency
  2010-06-16 22:31       ` erik quanstrom
@ 2010-06-16 22:58         ` EBo
  2010-06-16 23:43         ` Bakul Shah
  1 sibling, 0 replies; 23+ messages in thread
From: EBo @ 2010-06-16 22:58 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs; +Cc: nemo


> the ethernet, or shim ethernet, device seems like a better place for this.
> ip is not the only protocol!  that's what loopback(3) does, but without
> the real network.  it would be good to plug loopback or similar into a real
> ethernet.  it's also worth looking at loopback's implementation strategy,
> which allows for µs delays.  sleep is just too coarse-grained for my
> testing.
> 
> dialing up random reordering would seem to me to be a feature at
> this level.

out of curiosity, has anyone played with loopback to allow you to change
the order of packets?  Is that even allowed on Plan 9?

  EBo --





* Re: [9fans] Inducing artificial latency
  2010-06-16 22:31       ` erik quanstrom
  2010-06-16 22:58         ` EBo
@ 2010-06-16 23:43         ` Bakul Shah
  2010-06-17  0:10           ` erik quanstrom
  1 sibling, 1 reply; 23+ messages in thread
From: Bakul Shah @ 2010-06-16 23:43 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs


On Wed, 16 Jun 2010 18:31:17 EDT erik quanstrom <quanstro@quanstro.net>  wrote:
> On Wed Jun 16 18:17:45 EDT 2010, nemo@lsub.org wrote:
> > On programs where I control the client and the server, I simulated
> > it by spawning a process that did sleep and then do the write.
> >
> > That is, you could send many things at once (i.e., same bandwidth)
> > but you could pretend the thing was delayed.
> > One tricky point was to be sure that sends were still in order, but that
> > was a non-issue in my case.
> >
> > Perhaps being able to trigger delays on ip for testing/measuring
> > with a ctl would be a lot better, in the line of what you've done.
>
> the ethernet, or shim ethernet, device seems like a better place for this.

Agreed.

[Thinking aloud...]
You'd need some plumbing in devether to demultiplex incoming
packets addressed to this device (assuming it has its own MAC
address).

Something like a tap device would allow you to simulate a high-latency
link, a level-2 bridge, or whatever, in usercode.

[BTW, is there a strong reason why devether.c is in 9/pc/ and
 not 9/port/?  vlan code can be factored out from various
 ether*.c to a similar portable file (if not devether.c)]

> ip is not the only protocol!  that's what loopback(3) does, but without
> the real network.  it would be good to plug loopback or similar into a real
> ethernet.  it's also worth looking at loopback's implementation strategy,

Or bridge(3).

> which allows for µs delays.  sleep is just too course-grained for my
> testing.
>
> dialing up random reordering would seem to me to be a feature at
> this level.

A user-level simulator will make it easy to add random drops,
reordering, filtering, NAT, etc.




* Re: [9fans] Inducing artificial latency
  2010-06-16 23:43         ` Bakul Shah
@ 2010-06-17  0:10           ` erik quanstrom
  2010-06-17  2:02             ` Bakul Shah
  0 siblings, 1 reply; 23+ messages in thread
From: erik quanstrom @ 2010-06-17  0:10 UTC (permalink / raw)
  To: 9fans

> You'd need some plumbing in devether to demultiplex incoming
> packets addressed to this device (assuming it has its own MAC
> address).

i don't think you would.  if you're using the latency device as your
ethernet device, no muxing is required.

> Something like a tap device would allow you to simulate high
> latency link, a level-2 bridge or whatever in usercode.

this is difficult to do in user space for two reasons:
1.  the difficulty of getting precision timing.
2.  copies into/out of the kernel.

> [BTW, is there a strong reason why devether.c is in 9/pc/ and
>  not 9/port/?  vlan code can be factored out from various
>  ether*.c to a similar portable file (if not devether.c)]

actually, most of the code has been factored out, but one step
better than you imagine.  most of the code is in ../port/netif.c
which should be a good start on any type of network interface.

what remains is code littered with system dependencies.  it's
worthwhile comparing pc/devether.c with kw/devether.c

by the way, there is no vlan code, as far as i know, which
is absolutely fine by me.

> > ip is not the only protocol!  that's what loopback(3) does, but without
> > the real network.  it would be good to plug loopback or similar into a real
> > ethernet.  it's also worth looking at loopback's implementation strategy,
>
> Or bridge(3).

not sure that is too informative.

> > which allows for µs delays.  sleep is just too course-grained for my
> > testing.
> >
> > dialing up random reordering would seem to me to be a feature at
> > this level.
>
> A user level simulator will make it easy to add random drops,
> reordering, filtering, NAT, etc.

what do you mean by mac-level natting?  something like
hp vc?  seems way too extravagant for a simple packet muncher.

by the way,  if you keep the packets in the order they should
be released from the delay queue, reordering is automatic.
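
i.e. something like this (a user-space-flavoured sketch, names invented);
the sender then just releases the head of the queue whenever the clock
passes its release time:

typedef struct Dpkt Dpkt;
struct Dpkt {
	vlong	release;	/* nsec() at which to let the packet go */
	void	*pkt;
	Dpkt	*next;
};

/* keep the queue sorted by release time.  with per-packet jitter a
   later arrival can earn an earlier release, so reordering falls
   straight out of the insert. */
static void
delayq(Dpkt **q, void *pkt, int delayus, int jitterus)
{
	Dpkt *p, **l;

	p = mallocz(sizeof *p, 1);
	p->pkt = pkt;
	p->release = nsec() + (delayus + nrand(jitterus+1))*1000LL;
	for(l = q; *l != nil && (*l)->release <= p->release; l = &(*l)->next)
		;
	p->next = *l;
	*l = p;
}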

i suppose the delay queue gives new meaning to the term
"storage network".

- erik




* Re: [9fans] Inducing artificial latency
  2010-06-17  0:10           ` erik quanstrom
@ 2010-06-17  2:02             ` Bakul Shah
  2010-06-17  3:32               ` erik quanstrom
  0 siblings, 1 reply; 23+ messages in thread
From: Bakul Shah @ 2010-06-17  2:02 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Wed, 16 Jun 2010 20:10:49 EDT erik quanstrom <quanstro@quanstro.net>  wrote:
> > You'd need some plumbing in devether to demultiplex incoming
> > packets addressed to this device (assuming it has its own MAC
> > address).
>
> i don't think you would.  if you're using the latency device as your
> ethernet device, no muxing is required.

Yes, if you dedicate a port. A reasonable tradeoff in some cases.

> > Something like a tap device would allow you to simulate high
> > latency link, a level-2 bridge or whatever in usercode.
>
> this is difficult to do in user space for two reasons
> 1.  the difficulty of getting precision timing.
> 2.  copies into/out of the kernel.

I don't see what "difficulty" has to do with your two points.
The goal here would be flexibility more than performance.
Even so, modern machines are fast enough that lower performance
is not a big issue for many uses. See, for instance, vde (under
Linux and FreeBSD). These days in qemu I can get very decent
network speeds (network installs of FreeBSD under qemu
run faster than they did on native machines of a few
years back).

> > [BTW, is there a strong reason why devether.c is in 9/pc/ and
> >  not 9/port/?  vlan code can be factored out from various
> >  ether*.c to a similar portable file (if not devether.c)]
>
> actually, most of the code has been factored out, but one step
> better than you imagine.  most of the code is in ../port/netif.c
> which should be a good start on any type of network interface.

Good!

> what remains is code littered with system dependencies.  it's
> worthwhile comparing pc/devether.c with kw/devether.c

I did.  A lot of the code is the same. More can be factored
out (and, as the diff shows, some fixes appear in one and not
the other).

> by the way, there is no vlan code, as far as i know, which
> is absolutely fine by me.

grep -i vlan /sys/src/9/pc/ether*

> > A user level simulator will make it easy to add random drops,
> > reordering, filtering, NAT, etc.
>
> what do you mean by mac-level natting?  something like
> hp vc?  seems way too extravagant for a simple packet muncher.

I meant IP natting (but doing it in usercode). The point was
that something like a tap would allow you to munge packets however
you wished and stuff them back into the kernel's packet queues.

> by the way,  if you keep the packets in the order they should
> be released from the delay queue, reordering is automatic.

Reordering in this (testing) context means purposely getting packets
out of order.

Plan 9 is a very good platform for experimenting and for real
uses.  Even if such experiments fail or are done in areas we
might avoid (such as vlans), it is all good!  If drivers can
be written in user mode, that makes Plan 9 even more flexible.
Of course, what gets accepted back into the standard Plan 9
code base is a different matter.




* Re: [9fans] Inducing artificial latency
  2010-06-17  2:02             ` Bakul Shah
@ 2010-06-17  3:32               ` erik quanstrom
  0 siblings, 0 replies; 23+ messages in thread
From: erik quanstrom @ 2010-06-17  3:32 UTC (permalink / raw)
  To: 9fans

> > i don't think you would.  if you're using the latency device as your
> > ethernet device, no muxing is required.
>
> Yes, if you dedicate a port. A reasonable tradeoff in some cases.

you don't need to dedicate a port.  just open protocol -1.

> > > Something like a tap device would allow you to simulate high
> > > latency link, a level-2 bridge or whatever in usercode.
> >
> > this is difficult to do in user space for two reasons
> > 1.  the difficulty of getting precision timing.
> > 2.  copies into/out of the kernel.
>
> I don't see what "difficulty" has to do with your two points.
> The goal here would be flexibility more than performance.

my application requires doing this for (multiple)
10gbe at wire speed.

> > what remains is code littered with system dependencies.  it's
> > worthwhile comparing pc/devether.c with kw/devether.c
>
> I did.  A lot of the code is the same. More can be factored
> out (as the diff shows some fixes appear in one and not the
> other).

i'm sure it's not perfect.  but there are a lot of system
dependencies in there.  i don't think it's worth factoring
more stuff out, but i'd love someone to show that i'm wrong.

>> by the way, there is no vlan code, as far as i know, which
>> is absolutely fine by me.
>
> grep -i vlan /sys/src/9/pc/ether*

if you look carefully, none of this is actually used;
there are vlan tags mentioned in the ring entries, but
they are ignored.  as an example, take the 82598.

78: 	Vfta		= 0x0a000/4,	/* vlan filter table array. */
80: 	Vlnctrl		= 0x05088/4,	/* vlan control */
86: 	Imirvp		= 0x05ac0/4,	/* immediate irq vlan priority */
258: 	ushort	vlan;
279: 	ushort	vlan;

>
> > > A user level simulator will make it easy to add random drops,
> > > reordering, filtering, NAT, etc.
> >
> > what do you mean by mac-level natting?  something like
> > hp vc?  seems way too extravagant for a simple packet muncher.
>
> I meant IP natting (but doing it in usercode). The point was
> something like tap would allow you to munge packets however
> you wished and stuff them back into kernel pkt queues.

if you're doing the delay stuff at the ether level, i fail to see
what ip stuff (nat) has to do with it.

- erik




* Re: [9fans] Inducing artificial latency
  2010-06-15 22:14     ` Gorka Guardiola
@ 2010-06-22 14:53       ` John Floren
  2010-06-22 15:03         ` John Floren
  0 siblings, 1 reply; 23+ messages in thread
From: John Floren @ 2010-06-22 14:53 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 15, 2010 at 10:14 PM, Gorka Guardiola <paurea@gmail.com> wrote:
> On Wed, Jun 16, 2010 at 12:07 AM, erik quanstrom <quanstro@quanstro.net> wrote:
>>> Last time I needed something similar, I just run a modified iostats.
>>
>> how does iostats add latency?
>>
>
> hence the modified.
>

Hi Gorka

Do you still have your modified iostats around? I'm really not sure
about the results I get from adding sleeps in the ip code--it takes
over twice as long to copy a 10M file between my two test machines
(supposedly with 14ms of added RTT latency; ping seems to confirm this)
as it does to copy from a system running a non-modified kernel to
go.cs.bell-labs.com (14ms RTT). I'd like to try running the tests with
a different latency method.

Thanks

John
--
"With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich




* Re: [9fans] Inducing artificial latency
  2010-06-22 14:53       ` John Floren
@ 2010-06-22 15:03         ` John Floren
  0 siblings, 0 replies; 23+ messages in thread
From: John Floren @ 2010-06-22 15:03 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, Jun 22, 2010 at 2:53 PM, John Floren <slawmaster@gmail.com> wrote:
> On Tue, Jun 15, 2010 at 10:14 PM, Gorka Guardiola <paurea@gmail.com> wrote:
>> On Wed, Jun 16, 2010 at 12:07 AM, erik quanstrom <quanstro@quanstro.net> wrote:
>>>> Last time I needed something similar, I just run a modified iostats.
>>>
>>> how does iostats add latency?
>>>
>>
>> hence the modified.
>>
>
> Hi Gorka
>
> Do you still have your modified iostats around? I'm really not sure
> about the results I get from adding sleeps in the ip code--it takes
> over twice as long to copy a 10M file between my two test machines
> (supposed to have 14ms RTT latency added, ping seems to indicate this)
> than it does to copy from a system running a non-modified kernel to
> go.cs.bell-labs.com (14ms RTT). I'd like to try running the tests with
> a different latency method.
>
> Thanks
>
> John

That was meant to be off-list, but I suck at gmail apparently.

John
--
"With MPI, familiarity breeds contempt. Contempt and nausea. Contempt,
nausea, and fear. Contempt, nausea, fear, and .." -- Ron Minnich




Thread overview: 23+ messages
2010-06-15 21:29 [9fans] Inducing artificial latency John Floren
2010-06-15 21:39 ` Jorden M
2010-06-15 22:06   ` Enrique Soriano
2010-06-15 21:39 ` erik quanstrom
2010-06-15 21:53   ` David Leimbach
2010-06-15 21:59     ` erik quanstrom
2010-06-15 21:45 ` Devon H. O'Dell
2010-06-15 22:43   ` John Floren
2010-06-15 23:01     ` Devon H. O'Dell
2010-06-16 19:04   ` John Floren
2010-06-16 22:15     ` Francisco J Ballesteros
2010-06-16 22:31       ` erik quanstrom
2010-06-16 22:58         ` EBo
2010-06-16 23:43         ` Bakul Shah
2010-06-17  0:10           ` erik quanstrom
2010-06-17  2:02             ` Bakul Shah
2010-06-17  3:32               ` erik quanstrom
2010-06-15 21:58 ` Gorka Guardiola
2010-06-15 22:07   ` erik quanstrom
2010-06-15 22:14     ` Gorka Guardiola
2010-06-22 14:53       ` John Floren
2010-06-22 15:03         ` John Floren
2010-06-16  5:20 ` Skip Tavakkolian
