[mythtv-users] Building a new MythTV Backend for 2022

Stephen Worthington stephen_agent at jsw.gen.nz
Tue Jan 11 14:21:31 UTC 2022


On Mon, 10 Jan 2022 10:06:52 +0000, you wrote:

>On 10/01/2022 08:46, Stephen Worthington wrote:
>> 
>> If you want to record to the SSD, then you are likely to hit the
>> lifetime write limit fairly rapidly.  But just running MythTV and
>> normal Linux on an SSD and there are no problems with lifetime.  You
>> still need to worry about it just dying unexpectedly, like any disk
>> drive (or any electronics, for that matter).
>> 
>I would think that is the other way around. Sure you are writing TB chunks to a recording disk but 
>it is written once and then read for a while until deleted. On the other hand that database is 
>getting *hammered* all the time as it updates e.g. seek tables. And do not forget the daily 
>mythfilldatabase updates! Lots and lots of small updates to files and inodes all over the place.
>
>The one thing that you can be certain of with any (currently manufactured) SSD is that it is 
>guaranteed to fail. Once it reaches the lifetime limit then bang! it's gone. On the other hand, a 
>looked after HDD will just keep spinning.
>
>Processor speed and memory increases are such that I don't need that extra disk write speed, not for 
>something as non-critical as mythtv. SSDs undoubtedly have a place for certain use cases but 
>thrashing a media database isn't it, in my view.

In my case, my database is massive and its speed determines the speed
of MythTV.  Without an NVMe SSD, MythTV would be almost unusable for
me now.  Even with the NVMe SSD (which was the fastest available when
I got it), creating a new recording rule now takes about two seconds,
and there are equivalent delays for most things that use the database.
I am looking forward to when I can replace my MythTV box with one that
has an SSD that runs at three times the current speed, as that will
make it more responsive again.

Databases are in fact a classic case for use of an SSD, depending on
the size and performance requirements.  Most MythTV users have only
small databases where it does not matter much whether a hard drive or
SSD is used.  Mine is in a completely different class:

MariaDB [mythconverg]> select count(*) from recorded;
+----------+
| count(*) |
+----------+
|    50999 |
+----------+
1 row in set (0.001 sec)

MariaDB [mythconverg]> select sum(filesize)/1024/1024/1024 from recorded;
+------------------------------+
| sum(filesize)/1024/1024/1024 |
+------------------------------+
|           98676.497858861461 |
+------------------------------+
1 row in set (0.056 sec)

(98,676 GiB, or about 96.4 TiB)
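The query result is in GiB (bytes divided by 1024 three times), so converting it to TiB is one more division by 1024:

```shell
# Convert the query's GiB figure to TiB (one more division by 1024).
gib=98676.497858861461
tib=$(awk -v g="$gib" 'BEGIN { printf "%.2f", g / 1024 }')
echo "${tib} TiB"   # 96.36 TiB
```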

MariaDB [mythconverg]> select count(*) from recordedseek;
+-----------+
| count(*)  |
+-----------+
| 435567904 |
+-----------+
1 row in set (0.000 sec)

root@mypvr:/# du -hc /var/lib/mysql/mythconverg
18G     /var/lib/mysql/mythconverg
18G     total


And you are missing how SSDs work.  Flash memory can only be written
in one direction, typically from a 1 to a 0 (burning down).  To change
a 0 back to a 1, the entire flash block has to be erased.  Reads are
fast, burn downs are slower but still reasonably fast, and erases take
ages - erase times in old flash were measured in seconds, and they are
still not fast.  So when a write cannot be done just by burning down
existing bits, the SSD does not erase the block in place, as that
would be far too slow.  Instead, it assigns a fresh, already erased
block to that address, copies the old block's data to RAM, applies the
changes there, writes the result to the new block, and queues the old
block for erasure.  The erasing of blocks goes on in the background as
required, without causing any performance problems unless there are no
erased blocks available for a new write.

The SSD's firmware keeps track of which flash blocks are assigned to
which addresses, and it has more physical blocks than the address
space it presents to the user, so it has erased spares available all
the time unless there is very heavy write traffic for a long period.
It also tracks how often each block has been erased and uses the least
erased block on the erased block list for the next write.  This wear
levelling spreads the wear out so the blocks tend to wear at a similar
rate.

When a block fails to erase or fails to burn down, it is placed on a
"do not use" list.  Eventually, there are too many blocks on the "do
not use" list and there is no block available to be assigned to an
address.  At that point, the SSD is considered failed.  But all the
data is still in readable blocks and can be copied off.  And if you
are monitoring the SSD with SMART, you will have had lots of warning
that failure was approaching, so if you are sensible you will have
retired the SSD before then.  So you are no worse off than with a
dying hard drive, which these days also has spare sectors it can swap
in to replace failed sectors.
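As a sketch of what that SMART monitoring looks like: with smartmontools installed, `smartctl -A` on an NVMe drive reports a "Percentage Used" estimate of how much of the rated write endurance has been consumed (SATA drives report vendor-specific wear attributes instead).  The values below are made-up sample output, used here so the parsing can be shown self-contained:

```shell
# On a real system you would run: smartctl -A /dev/nvme0  (as root).
# Here we parse a sample of typical NVMe SMART output instead, so the
# example runs anywhere.  The numbers are illustrative only.
sample_smart_output() {
  cat <<'EOF'
Critical Warning:                   0x00
Temperature:                        38 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    31%
Data Units Written:                 48,211,556 [24.6 TB]
EOF
}

# "Percentage Used" is the drive's own estimate of endurance consumed;
# 100% means the rated write endurance is used up.
used=$(sample_smart_output | awk -F': *' '/^Percentage Used/ { gsub(/%/, "", $2); print $2 }')
echo "endurance used: ${used}%"
if [ "$used" -ge 90 ]; then
  echo "WARNING: SSD nearing end of rated write endurance - plan replacement"
fi
```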

However, just like with hard drives, a catastrophic failure at any
time can lose you all the data on the drive, whether it is a hard
drive or an SSD.  That seems to be what happened with your SSDs,
rather than their reaching the end of their rated lifetimes.  Many
years ago, I had a whole set of Seagate 7200.11 hard drives do the
same (5 or 6 of them) - they were just a badly designed drive.

You do need to do some calculations before you select an SSD, to see
what lifetime you can expect.  To get a longer lifetime, you just buy
a bigger SSD, even if you do not need the space.  My calculations for
this one said I would get 5-20 years out of it, depending on how my
database grew.  So I am happy with it lasting 5.5 years so far, and I
do intend to replace it (and the motherboard and CPU) in the next year
or so.  I also have some hard drives that are still going after 10
years of 24/7 use, but most tend to fail in the 5-7 year range now -
more modern drives are not built as well.
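The calculation is simple: divide the drive's rated endurance (the TBW figure on its spec sheet) by your average daily write volume.  A sketch with made-up but plausible numbers - a drive rated for 1200 TB written, and a database workload writing 500 GB a day (neither figure is from my setup):

```shell
# Expected SSD lifetime = rated endurance (TBW) / daily write volume.
# Both figures below are illustrative assumptions, not measurements:
tbw_tb=1200        # endurance rating from the drive's spec sheet, in TB written
daily_gb=500       # estimated write traffic per day, in GB
awk -v tbw="$tbw_tb" -v d="$daily_gb" 'BEGIN {
  years = (tbw * 1000) / d / 365     # TB -> GB, then days -> years
  printf "expected lifetime: %.1f years\n", years
}'
```

Since TBW ratings scale roughly with capacity within a drive family, buying a bigger drive than you need is what buys the longer lifetime.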


More information about the mythtv-users mailing list