[mythtv-users] Building a new MythTV Backend for 2022

Mark Wedel mwedel at sonic.net
Sun Jan 9 20:10:17 UTC 2022


On 1/9/22 4:57 AM, Jim Abernathy wrote:
> 
<trimming some of the comments>
> 
> 
> Thanks for your thoughts.  As to mirrored SSDs, NVMe or SATA, I've heard that RAID writes even in the unused spaces and reduces the life of an SSD. What have you heard?
> 
> At present, my production MythTV/NAS boot SSD is 2.5" SATA and I have a Clonezilla image of it. It's only 16GB, so I can use the cheapest available size.
> 
> I built a test system yesterday with 2 identical WD hard drives and a boot SSD, using Ubuntu 20.04.3 and ZFS. The OS install is standard, and creating the zpool for the mirrored HDs was one simple command. It even mounts the mirror for you after creation.  One change: the zpool is mounted on /NAS in my case, so I would have to set up the MythTV directory structure to use that. I'm going to install MythTV today and see how it performs.  I ran some FIO benchmarks and the performance isn't bad compared to bare metal.
> 
> I've even debated getting a real NAS like the Asustor AS1104T and just using a simple small PC with GbE Ethernet and a PCIe slot for the tuner. But the AS1104T is something else to manage.
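For reference, the one-command mirror creation described above would look something like this sketch; the pool name and device identifiers are placeholders (using /dev/disk/by-id paths is generally preferred over /dev/sdX so the pool survives device reordering):

```shell
# Create a mirrored pool named "NAS" from two disks (placeholder IDs).
# ZFS creates and mounts the filesystem at /NAS automatically.
zpool create NAS mirror \
    /dev/disk/by-id/wwn-0xDISK1 \
    /dev/disk/by-id/wwn-0xDISK2

# Verify the mirror:
zpool status NAS
```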

  I've not had an issue with SSDs running out of read/write cycles.  I have had SSDs fail in other ways, though, so SSDs are not failure-proof.

  Some RAID technologies require initializing the volume with a known set of data, so that uses up one full write of the devices.  I don't think ZFS does this, because the volume manager (RAID) and filesystem are integrated.  ZFS also supports TRIM, which tells the SSDs that certain blocks are no longer in use - this lets the SSD firmware do wear leveling by moving blocks around.  I'm not sure whether mdadm supports this.
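For what it's worth, TRIM on a ZFS pool can be run manually or enabled to run automatically; a sketch, with the pool name "NAS" as a placeholder:

```shell
# One-off manual TRIM of the pool:
zpool trim NAS

# Or have ZFS pass TRIM commands continuously as blocks are freed:
zpool set autotrim=on NAS

# Check TRIM status/progress per device:
zpool status -t NAS
```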

  Note that you can easily change the ZFS mount points, or create whatever directory structure you want (zfs create ..., zfs set mountpoint=...).
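As a concrete sketch of those two commands: the pool name "NAS" and the MythTV paths below are assumptions, adjust to match your storage group setup:

```shell
# Create a dataset per use so each gets its own properties and snapshots:
zfs create NAS/mythtv
zfs create NAS/mythtv/recordings

# Point the recordings dataset at the directory MythTV is configured to use
# (example path - set this to whatever your storage group expects):
zfs set mountpoint=/var/lib/mythtv/recordings NAS/mythtv/recordings

# Confirm where everything is mounted:
zfs list -o name,mountpoint
```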



More information about the mythtv-users mailing list