From: "Theodore Ts'o" <email@example.com>
To: Norman Wilson <firstname.lastname@example.org>
Subject: Re: [TUHS] Is it time to resurrect the original dsw (delete with switches)?
Date: Mon, 30 Aug 2021 10:42:53 -0400 [thread overview]
Message-ID: <YSzubTwEY5YNSfi6@mit.edu> (raw)
On Mon, Aug 30, 2021 at 09:06:03AM -0400, Norman Wilson wrote:
> Not to get into what is something of a religious war,
> but this was the paper that convinced me that silent
> data corruption in storage is worth thinking about:
> A key point is that the character of the errors they
> found suggests it's not just the disks one ought to worry
> about, but all the hardware and software (much of the latter
> inside disks and storage controllers and the like) in the
> storage stack.
There's nothing I'd disagree with in this paper. I'll note though
that part of the paper's findings is that silent data corruption
occurred at a rate roughly 1.5 orders of magnitude less often than
latent sector errors (e.g., UERs).
The probability of at least one silent data corruption during the
study period (41 months, although not all disks would have been in
service during that entire time) was P = 0.0086 for nearline disks and
P = 0.00065 for enterprise disks. And P(1st error) remained constant
over disk age, and per the authors' analysis, it was unclear whether
P(1st error) changed as disk size increased --- which is
representative of non-media-related failures (not surprising given how
many checksum and ECC checks are done by the HDDs).
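To put those per-disk probabilities in concrete terms, here is a
back-of-the-envelope sketch; the fleet size of 10,000 disks is an
invented illustration for scale, not a figure from the paper:

```python
# Expected number of disks with at least one silent corruption over
# the ~41-month study window, using the per-disk probabilities quoted
# above. The 10,000-disk fleet is a hypothetical example, not data
# from the paper.
p_nearline = 0.0086     # P(>=1 silent corruption), nearline disks
p_enterprise = 0.00065  # P(>=1 silent corruption), enterprise disks
fleet = 10_000

print(f"nearline:   ~{p_nearline * fleet:.1f} affected disks")
print(f"enterprise: ~{p_enterprise * fleet:.1f} affected disks")
```

So even at nearline rates, silent corruption is rare per disk but
close to inevitable at fleet scale, which is why end-to-end
checksumming pays off mainly for large deployments.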
A likely supposition IMHO is that the use of more costly enterprise
disks correlated with higher quality hardware in the rest of the
storage stack --- so things like ECC memory really do matter.
> As Ted has said, there are philosophical reasons why some prefer to
> avoid it, but if you don't subscribe to those it's a fine answer.
WRT running ZFS on Linux, I wouldn't call it philosophical reasons,
but rather legal risks. Life is not perfect, so you can't drive any
kind of risk (including risks of hardware failure) down to zero.
Whether you should be comfortable with the legal risks in this case
very much depends on who you are and what your risk profile might be,
and you should contact a lawyer if you want legal advice. Clearly the
lawyers at companies like Red Hat and SuSE have given very different
answers from the lawyers at Canonical. In addition, the answer for
hobbyists and academics might be quite different from that for a large
company making lots of money, which is more likely to attract the
attention of the leadership and lawyers at Oracle.