From: dexen deVries
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Tue, 4 Oct 2011 14:52:49 +0200
Subject: Re: [9fans] copying fossil filesystem to a bigger disk

On Tuesday 04 of October 2011 14:33:27 Richard Miller wrote:
> > 250000*4096/20.78 = 49 MB/s. this is less than 1/2 the available
> > bandwidth, giving the drive lots of wiggle room. and since you're
> > doing sequential i/o the drive can do write combining.
>
> Is there any experiment I can do (not involving a crowbar and a
> microscope) to find out the real physical sector size? Bigger
> transfers get more of the bandwidth, but then a smaller proportion
> of the transfer needs read/modify/write. I could do random addressing
> but then I would expect seek time to dominate.

compare write bandwidth for 4096B data chunks written at offset
(n*4096B + k*512B), where `n' is random. compare 8 runs, each with a
constant `k' from {0, 1, ..., 7}. sync after every write, and in each
run write much more than the drive's cache size (probably 64MB on
modern HDs).

if your drive uses 4096B physical sectors, one of the runs will land
exactly on sector boundaries, while the other 7 will read-modify-write
two sectors on every write; that one run will show much better
performance.

if your drive uses 512B sectors, all runs will show the same
performance.

--
dexen deVries

[[[↓][→]]]
http://xkcd.com/732/
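
PS. a minimal sketch of the above in C, assuming a Linux host: O_DIRECT
plus O_SYNC stand in for `sync after every write', and the device path,
run length and 1GB test window are placeholders, not fixed parts of the
experiment. on Plan 9 the same idea should work with pwrite(2) on
/dev/sdXX/data and nsec() for timing.

/*
 * sectorprobe.c: time 4096B direct, synchronous writes at offsets
 * n*4096B + k*512B for each k in 0..7.
 * WARNING: writes raw data; point it at a scratch disk only.
 * build: cc -o sectorprobe sectorprobe.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

enum { Chunk = 4096, Nwrites = 32768 };	/* 128MB per run, well past a 64MB drive cache */

int
main(int argc, char **argv)
{
	if(argc != 2){
		fprintf(stderr, "usage: %s /dev/sdX (scratch disk!)\n", argv[0]);
		return 1;
	}
	/* O_DIRECT bypasses the page cache; O_SYNC syncs after every write */
	int fd = open(argv[1], O_WRONLY|O_DIRECT|O_SYNC);
	if(fd < 0){ perror("open"); return 1; }

	void *buf;
	if(posix_memalign(&buf, Chunk, Chunk)){ perror("posix_memalign"); return 1; }
	memset(buf, 0xA5, Chunk);

	srandom(1);
	for(int k = 0; k < 8; k++){
		struct timespec t0, t1;
		clock_gettime(CLOCK_MONOTONIC, &t0);
		for(int i = 0; i < Nwrites; i++){
			/* random 4096B-aligned base within the first 1GB, slid by k*512B */
			off_t off = (off_t)(random() % 262144) * Chunk + (off_t)k * 512;
			if(pwrite(fd, buf, Chunk, off) != Chunk){ perror("pwrite"); return 1; }
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec)/1e9;
		printf("k=%d: %.2f MB/s\n", k, (double)Nwrites * Chunk / s / 1e6);
	}
	close(fd);
	return 0;
}

on a 4096B-sector drive one value of `k' should stand out with clearly
higher MB/s; flat numbers across all eight runs point to true 512B
sectors.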