From: joey@plan9.white-swift.com
To: 9fans <9fans@9fans.net>
Date: Wed, 30 Dec 2020 03:12:52 -0500
Subject: [9fans] Re: Dual dialing/forking sessions to increase 9P throughput
In-Reply-To: <86tus46ujm.fsf@cmarib.ramside>

Do you know if there has ever been a comprehensive evaluation of tuning parameters for 9P? I am sure it is obvious from my previous post that I am on the newer side of Plan9 users.

I feel like part of it could be a configuration issue, that is to say specifying the gbe vs ether data type and setting -m to a 9k size. Additionally, would it violate the 9P protocol if you chunked the data first (i.e. if you have a file with 256 kbytes and ran it over 4 connections with 64 kbytes each)? There is an overhead dx/dt that would consume the gains at some point, but from a theoretical standpoint is it possible? Or, more accurately, would such an approach violate 9P?
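To make the chunking idea concrete, here is a rough sketch of what I mean. It is purely illustrative: one file descriptor per "connection" and plain pread(2)/pthreads standing in for Tread messages on four separate 9P channels, and the names (NCONN, CHUNK, Slice) are made up for the example.

/*
 * Hypothetical sketch: fetch the first 256 kbytes of a file as four
 * 64 kbyte slices, one per worker, the way four separate connections
 * might each be handed a byte range.  pread(2) stands in for Tread.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

enum { NCONN = 4, CHUNK = 64*1024 };

typedef struct {
    int fd;       /* one descriptor per "connection" */
    off_t off;    /* where this slice starts in the file */
    char *dst;    /* where its bytes land in the shared buffer */
} Slice;

static void *
reader(void *arg)
{
    Slice *s = arg;
    ssize_t n, got = 0;

    while (got < CHUNK) {
        n = pread(s->fd, s->dst + got, CHUNK - got, s->off + got);
        if (n <= 0)
            break;    /* error or short file: give up on this slice */
        got += n;
    }
    return NULL;
}

int
main(int argc, char **argv)
{
    pthread_t tid[NCONN];
    Slice s[NCONN];
    char *buf;
    int i;

    if (argc != 2 || (buf = malloc(NCONN*CHUNK)) == NULL)
        return 1;
    for (i = 0; i < NCONN; i++) {
        s[i].fd = open(argv[1], O_RDONLY);   /* separate fd ~ separate connection */
        s[i].off = (off_t)i * CHUNK;
        s[i].dst = buf + (size_t)i * CHUNK;
        pthread_create(&tid[i], NULL, reader, &s[i]);
    }
    for (i = 0; i < NCONN; i++)
        pthread_join(tid[i], NULL);
    /* buf now holds up to 256 kbytes, fetched as four independent ranges */
    free(buf);
    return 0;
}

As far as I can tell nothing in the protocol forbids this, since every read carries its own offset, but I may well be missing something.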

> celebrate_newfound_speed();
This is honestly phenomenal :)

> switch (srv.proto) {
> case TCP:
>   iosize = max(chan.rsize, chan.wsize);
>   init_9p_tcp(srv.addr, ver9p, iosize);
Again, maybe this is ignorance, but my understanding was that while Plan9 can support a lot of things running TCP (for the rest of the world), it supports and prefers to use IL/9P for such a connection. TCP's retransmission throttling is universally bad, so it might be a function more of TCP than of the Plan9 server. I once had a dedicated 100G link between Dallas and Denver, and before any tuning it only delivered about 4G of bandwidth (yes, that is not a typo). Some simple tuning (both endpoints were Linux devices) got that up to 50G almost immediately. But TCP was the transport of choice and we never got to the 100G level; there were just too many variables, and getting close would knock the connection bandwidth way back. We only had the link for a short time, so possibly this could have been worked out, but my point is that really anything over 1G copper cables is non-trivial when TCP is involved.
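To put rough numbers on why the untuned link sat near 4G: TCP can never have more than one window of data in flight per round trip, so throughput tops out around window/RTT. The figures below are assumptions for illustration (a ~20 ms Dallas-Denver round trip and a ~6 Mbyte default receive window), not measurements from that link.

/*
 * Back-of-the-envelope: a long fat pipe is only full when the TCP
 * window covers bandwidth * RTT.  The rtt and default window below
 * are guesses for illustration, not measured values.
 */
#include <stdio.h>

int
main(void)
{
    double rtt = 0.020;                 /* assumed ~20 ms round trip */
    double window = 6.0*1024*1024;      /* assumed ~6 Mbyte default receive window */
    double link = 100e9;                /* 100 Gbit/s link */

    /* ceiling imposed by the window: ~2.5 Gbit/s, the same ballpark as the 4G we saw */
    printf("window-limited throughput: %.1f Gbit/s\n", window*8/rtt/1e9);

    /* window needed to fill the link: ~240 Mbytes */
    printf("window needed for line rate: %.0f Mbytes\n", link/8*rtt/(1024*1024));
    return 0;
}

Which is why raising the kernel's socket-buffer limits, so the window can actually grow toward the bandwidth-delay product, is usually the first tuning step that pays off on a link like that.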

~Joey