[mythtv-users] Low Power System

Gary Buhrmaster gary.buhrmaster at gmail.com
Tue Mar 14 19:20:49 UTC 2017


On Thu, Mar 9, 2017 at 8:15 AM, Simon Hobson <linux at thehobsons.co.uk> wrote:
....
> But, my understanding of how SSDs work is this - and I'm ready to be shot down if it's wrong :

The problem is that SSDs have multiple implementations,
and every generation may behave differently (sometimes
better, sometimes worse) for any specific use case.

What you suggest regarding remapping of writes and
constant background wear leveling is reportedly common
in most current generation firmwares.

Now, depending on how the SSD is implemented, some
can complete writes in the background even if power is
lost (models with various forms of power failure protection)
and some can not.  If they can, all is good.  They can
immediately tell you they are done.  If they can not, they
*should* not report the write as complete until they finish
the read/update/write cycle, which can be quite slow.  Some
lower end consumer devices used to just lie, and the
result (in the case of power failure) was corruption,
sometimes very bad corruption.  Most current generation
devices are better about such things now (at least they do
not lie, and there should be no data corruption if you run
with barriers), and there are tricks (such as using the SLC
portion of the flash, which also holds the various internal
mapping structures, as a fast temporary write area) that
can speed up the writes for a period.
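
To make "do not report complete until durable" concrete
from the application side: on Linux a plain write() only
reaches the page cache, and you have to ask for durability
explicitly.  A minimal Python sketch (the file name is
just for illustration):

    import os

    # Write a record and treat it as durable only once fsync()
    # returns.  With barriers enabled, fsync() also asks the
    # drive to flush its volatile write cache, so a drive that
    # does not lie will hold the call until the data is stable.
    fd = os.open("recording.part", os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, b"video data...")
        os.fsync(fd)
    finally:
        os.close(fd)

On a drive with power failure protection the fsync() returns
almost immediately; on an honest drive without it, this is
exactly where the slow read/update/write shows up.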

Some drives have enough spare chunks that, regardless
of what you think you are using, there are sufficient
spares to handle the occasional burst of activity.  They
were commonly easy to identify because they had
sizes like 500GB (when there was 512GB of flash on
the drive), and some manufacturers would subtract
even more, for more available chunks (sometimes to
speed up writes, sometimes because they needed the
extra flash to handle expected failures).  So they always
have spare chunks available (both to handle flash
failures, and to keep a few spare sectors around
to speed up writing).
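
For the 500GB-on-512GB example, the implied minimum
overprovisioning is easy to work out (real drives also hide
ECC and mapping overhead, so treat this as a floor):

    # Rough overprovisioning estimate for a "500GB" drive
    # built from 512GB of raw flash.
    raw_flash_gb = 512      # physical NAND on the drive
    advertised_gb = 500     # capacity exposed to the host
    spare_gb = raw_flash_gb - advertised_gb
    print(f"{spare_gb}GB spare, "
          f"~{100 * spare_gb / advertised_gb:.1f}% overprovisioned")
    # -> 12GB spare, ~2.4% overprovisioned

Manufacturers that subtract even more (e.g. selling the
same flash as 480GB) are simply moving that percentage up.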

Trim can be both a benefit and a curse, again depending
on the firmware.  Trim can help the drive know that some
blocks (which might end up being whole chunks) can be
placed on the "can be erased" list, but some firmwares
update that internal list more synchronously than
others.  Those drives actually suck when trim/discard is
enabled (everything stalls while the drive collects and
manages the new list).  Other drives do all the work in
the background (after all, if they miss adding a chunk
to the list, it is no worse than never having been told
the chunk was free in the first place), so trim is
"free" (well, mostly free).
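
This is why, on Linux, batching the work with periodic
fstrim(8) (as the standard fstrim.timer does weekly) is
often kinder to such firmware than mounting with
-o discard, which issues a discard on every delete.  A
small Python sketch of the batched approach (the mount
point is an assumption, and it needs root):

    import subprocess

    # Trim all free space on a mounted filesystem in one
    # batched pass, the way a weekly fstrim.timer would.
    # On firmware that handles trim synchronously, one big
    # off-hours pass stalls far less than per-delete discards.
    def trim(mountpoint: str = "/") -> str:
        result = subprocess.run(
            ["fstrim", "-v", mountpoint],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print(trim())   # e.g. "/: 12.3 GiB (...) trimmed"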

"Enterprise" drives have a slightly different design point
than you mention and can maintain the highest write
speeds even when full.

But the problem is that every variant is unique.  Saying
that SSDs are great, or that they fail miserably, are both
(as likely as not) true statements about specific examples.
But they are no more generally accurate than the claim
of the person that all Western Digital drives are crap
because the one they bought failed after 2 months,
losing all the family photos, and that one should never
again buy Western Digital hard drives.

It should be noted that every generation of SSD design
(and firmware updates) gets better at addressing the
cases where the performance is poor in consumer
OS's (which mostly means Windows, but most of the
same improvements help Linux too).  Most of the
horror stories (say, the OCZ death-SSDs, equaled only
by the IBM Deathstars) are no longer representative of
current generation devices.  That says nothing about
any particular new SSD that comes to market tomorrow,
which could end up being the new death-SSD.

For OS boot purposes, I tend to purchase (new, or low
hour) surplus enterprise SSDs (that were pulled from
servers for immediate upgrade reasons).  Small
enterprise SSDs can be found in the "new to you"
marketspaces for low cost.  I recently picked up an
Intel 80GB S3500 for about $35 (including shipping),
which had a total number of power on hours in the
dozens.  For all practical purposes it was NIB.  It is
sufficiently large for an OS boot drive for systems that
cannot PXE boot (and the lowest power, quietest, boot
drive is still going to be no drive at all).

So, yes, read up on the experiences, but be sure to
understand that those experiences are with small
sample sizes, for generations of devices that likely
no longer exist.

AFAIK there is no (currently publicly available) list
of SSD reliability equivalent to the Backblaze
reports for HDDs, i.e. one with sample sizes large
enough to draw statistically significant conclusions.  I do
not suppose anyone knows of such a public list, do
you?

