From: Jon Steinhart <firstname.lastname@example.org>
To: The Unix Heretics Society mailing list <email@example.com>
Subject: Re: [TUHS] Is it time to resurrect the original dsw (delete with switches)?
Date: Thu, 02 Sep 2021 08:52:23 -0700 [thread overview]
Message-ID: <202109021552.182FqNLB3750785@darkstar.fourwinds.com> (raw)

Hey, wanted to thank everybody for the good information on this topic.
Was pleasantly surprised that we got through it without a flame war :-)

I have a related question in case any of you have actual factual knowledge
of disk drive internals. A friend who was in charge of reliability
engineering at Seagate used to be a great resource, but he's now retired.
For example, years ago someone ranted at me about disk manufacturers
sacrificing reliability to save a few pennies by removing the head ramp;
I asked my friend, who explained how removing that ramp actually improved
reliability.

So my personal setup here bucks conventional wisdom in that I don't do RAID.
One reason is that I read the Google reliability studies years ago and
interpreted them to say that if I bought a stack of disks on the same day I
could expect them to fail at about the same time. Another reason is that
the 24x7 spinning disk drives are the biggest power consumers in my office.
The last reason is that my big drives hold media (music/photos/video), so
if one dies it's not going to be any sort of critical interruption.

My strategy is to keep three (or more) copies. I realized that I first came
across this wisdom as a teenager, when Carl Christensen and Heinz Lycklama
were teaching me both coding and spelunking, in the guise of how many spare
headlamps one should carry in a cave. There's the copy on my main machine,
another in a fire safe, and I rsync to another copy on a duplicate machine
up at my ski condo. Plus, I keep lots of old fragments of stuff on retired
small (<10T) disks that are left over from past systems. And, a lot of the
music part of my collection is backed up by proof-of-purchase CDs in the
store room or shared with many others, so it's not too hard to recover.

Long intro, on to the question: anyone know what it does to reliability to
spin disks up and down? I don't really need the media disks to be constantly
spinning; when whatever I'm listening to in the evening finishes, the disk
could spin down until morning to save energy. Likewise the video disk drive
is at most used for a few hours a day.

My big disks (currently 16T and 12T) bake when they're spinning, which
can't be great for them, but I don't know how that compares to the
mechanical stress of spinning up and down from a reliability standpoint.
Anyone know?
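
For what it's worth, on Linux one can experiment with idle spin-down and
watch how the drive's own cycle counters react; a sketch (the device name
is a placeholder, and the -S value encoding is from the hdparm man page):

```shell
# Ask the drive to spin down after 30 minutes idle. For -S, values
# 1-240 mean N*5 seconds; 241-251 mean (N-240)*30 minutes, so 241 = 30m.
hdparm -S 241 /dev/sdX

# SMART attribute 4 (Start_Stop_Count) and 193 (Load_Cycle_Count) show
# how many spin-up / head-park cycles the drive has accumulated, which
# is the number the manufacturers rate against.
smartctl -A /dev/sdX | grep -E 'Start_Stop_Count|Load_Cycle_Count'
```

That only tells you the count, of course, not what the count does to the
failure rate, which is the question.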

Thread overview: 21+ messages
2021-08-29 22:12 Jon Steinhart
2021-08-29 23:09 ` Henry Bent
2021-08-30 3:14 ` Theodore Ts'o
2021-08-30 13:55 ` Steffen Nurpmeso
2021-08-30 9:14 ` John Dow via TUHS
2021-08-29 23:57 ` Larry McVoy
2021-08-30 1:21 ` Rob Pike
2021-08-30 3:46 ` Theodore Ts'o
2021-08-30 23:04 ` Bakul Shah
2021-09-02 15:52 ` Jon Steinhart [this message]
2021-09-02 16:57 ` Theodore Ts'o
2021-08-30 3:36 ` Bakul Shah
2021-08-30 11:56 ` Theodore Ts'o
2021-08-30 22:35 ` Bakul Shah
2021-08-30 15:05 ` Steffen Nurpmeso
2021-08-31 13:18 ` Steffen Nurpmeso
2021-08-30 21:38 ` Larry McVoy
2021-08-30 13:06 Norman Wilson
2021-08-30 14:42 ` Theodore Ts'o
2021-08-30 18:08 ` Adam Thornton
2021-08-30 16:46 ` Arthur Krewat