9fans - fans of the OS Plan 9 from Bell Labs
From: erik quanstrom <quanstro@quanstro.net>
To: john@degood.org, 9fans@9fans.net
Subject: Re: [9fans] just an idea (Splashtop like)
Date: Mon,  3 Aug 2009 00:38:02 -0400	[thread overview]
Message-ID: <93911a18f997c5389110851f30a25fc0@quanstro.net> (raw)
In-Reply-To: <4A7646DB.7020403@degood.org>

> For the Intel SSD one must also consider:
>
> > 3.5.4 Write Endurance
> > 32 GB drive supports 1 petabyte of lifetime random writes and 64 GB drive supports 2 petabyte of lifetime random writes.
> That is equivalent to writing the capacity of the SSD 31250 times.  At
> the specified random 4K write rate of 3300 IOPS one could wear out the
> SSD in 876 days.  Non-random writes could cause more rapid wear,
> depending on their pattern and the wear leveling algorithms in the SSD.

do you think this is a serious limitation?  by my calculation, assuming
that everything written is read at least once and that reads are 10x
faster than writes (so each second of writing at 3300 iops costs 1.1s
of i/o in total), wearing out the drive takes
	1000^5 bytes / (3300 s^-1 * 1.1^-1 * 4*1024 bytes) / 86400 s/day
		= 942 days
this is 153 days short of the 3-year warranty period.  by the way, one
would expect ~8 ures during this test (8e15 bits * 1e-15 ures/bit = 8).
(http://download.intel.com/support/ssdc/hpssd/sb/english_ssd_3_year_warranty.pdf)
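
the arithmetic above can be checked with a few lines of python.  the
constants are the ones assumed in this message (1 pb = 1000^5 bytes,
4k random writes at 3300 iops, reads 10x faster than writes), not
measured figures:

```python
# back-of-envelope check of the lifetime and ure figures above
BYTES_PER_PB = 1000**5          # intel quotes the petabyte decimally
IOPS = 3300                     # specified 4k random write rate
WRITE_SIZE = 4 * 1024           # bytes per random write
READ_PENALTY = 1.1              # each 1s of writes adds 0.1s of read-back

write_rate = IOPS * WRITE_SIZE               # bytes/s of pure writes
effective_rate = write_rate / READ_PENALTY   # bytes/s including reads
days = BYTES_PER_PB / effective_rate / 86400

# unrecoverable read errors at the quoted rate of 1 per 1e15 bits
ures = (BYTES_PER_PB * 8) * 1e-15

print(round(days))   # 942
print(round(ures))   # 8
```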

do you really think it's reasonable that someone could run this
drive at 100% of capacity for 2½ years?  even allowing for shipping
and installation time, that takes you right up to the end of the
warranty.
can you think of how this could be done with a plan 9 application
that's doing something useful?

it's hard to know whether non-random writes create more wear than
intel specifies.  strictly sequential i/o should create similar wear,
because 16 4k writes can be combined into one flash cycle, and
16*3300*4k is about 216 mb/s.  so i don't see how one could get in
more than 3300 flash cycles/s and increase the wear rate.
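
the sequential-throughput bound works out as follows (the coalescing
factor of 16 is the assumption stated above, not a datasheet figure):

```python
# 16 4k writes coalesced into one flash program cycle, at the same
# 3300 cycles/s, gives the sequential throughput ceiling
COALESCE = 16
mb_per_s = COALESCE * 3300 * 4 * 1024 / 1e6

print(round(mb_per_s))   # 216
```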

- erik




Thread overview: 20+ messages
2009-07-31 17:50 David Leimbach
2009-08-01  1:56 ` J.R. Mauro
2009-08-01  3:12   ` ron minnich
2009-08-01 11:58     ` erik quanstrom
2009-08-01 14:51       ` tlaronde
2009-08-01 14:57         ` erik quanstrom
2009-08-01 15:49         ` ron minnich
2009-08-01 19:10           ` J.R. Mauro
2009-08-02 18:17             ` erik quanstrom
2009-08-02 19:23               ` Anthony Sorace
2009-08-02 22:16                 ` erik quanstrom
2009-08-03  0:23                   ` Devon H. O'Dell
2009-08-03  0:37                     ` erik quanstrom
2009-08-03  2:09               ` John DeGood
2009-08-03  4:38                 ` erik quanstrom [this message]
2009-08-03 13:54                   ` John DeGood
2009-08-08  4:46               ` Dave Eckhardt
2009-08-08 11:37                 ` erik quanstrom
2009-08-02  2:55   ` John DeGood
2009-08-02  3:16     ` J.R. Mauro
