[mythtv-users] Building a new MythTV Backend for 2022

Simon linux at thehobsons.co.uk
Wed Jan 12 21:39:38 UTC 2022


Hika van den Hoven <hikavdh at gmail.com> wrote:

>> Or it could fail next week - but it’s OK, they’ll send you a nice new but blank one to replace it.
>> Kingston have just sent me a new 240G drive to replace one that
>> failed. In this case I could probably recover the data because it
>> works for a while and then “just disappears” off the bus - not that
>> I need to as it was half of a mirrored pair.

> Very probably this is a sata controller or cable issue. I have
> encountered those and since I replaced the controller it's as stable
> as a rock.

No, definitely the drive. Tried the usual swap drives round business and the fault moved with the drive, didn’t stay with the port or cable.



Mark Wedel <mwedel at sonic.net> wrote:

> What does not quite seem to work fine is the updating of grub - maybe it has changed, but it used to be the case that it would only update the MBR of one of the drives, and I would have to run the command explicitly to update the MBR on the second drive in the mirror.

That is correct. You need to run “grub-install /dev/sdX” for each drive in the mirror. I once had a machine with grub installed on 5 drives - just because there were 5 drives in a RAID5 for the main storage, and to keep things simple I just mirrored /boot across all 5 of them.
As you say, it’s “annoying” if you’ve forgotten this and the one drive with grub on it fails. It’s annoying, but less so, if one of your boot drives fails and the machine then tries to boot from one of your data drives due to the “boot order” setting in BIOS.
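The per-drive step above can be sketched as a short loop. The device names here are examples (substitute the members of your own mirror), and the `echo` is left in so you can review the commands before running them for real:

```shell
# Reinstall GRUB's boot code on every member of the mirror so that any
# surviving drive remains bootable. /dev/sda and /dev/sdb are example names.
# Remove the leading "echo" (and run as root) to actually write the MBRs.
for drive in /dev/sda /dev/sdb; do
    echo grub-install "$drive"
done
```

On Debian-style systems, remember that `update-grub` only regenerates the config file; it never writes the boot code to a second drive, which is why the explicit per-drive step is needed.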



On the subject of drive speed and database size, lots of RAM helps here. If you have enough RAM to keep the database (most of the time) in RAM then speed is massively improved - you need to do some tweaking of DB engine settings to maximise this. A few weeks ago I was talking with someone who, for work, was working with a machine with multi-TB of RAM for that very purpose !
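For the MythTV case (MySQL/MariaDB), the main knob for keeping the database in RAM is the InnoDB buffer pool. A minimal sketch of the sort of tweaking meant above - the file path and sizes are illustrative, not recommendations; size the pool to comfortably exceed your database's on-disk size:

```ini
# Example [mysqld] settings, e.g. in /etc/mysql/conf.d/mythtv.cnf.
[mysqld]
innodb_buffer_pool_size = 2G    # main InnoDB cache - the key setting for
                                # keeping the working set resident in RAM
innodb_log_file_size    = 256M  # a larger redo log smooths write bursts
```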
<rambling OT anecdote mode>And a good few years ago now I ran a system with SCO OpenServer (back when they did good software and didn’t sue their customers). That was limited to 460,000k of cache (funny how things like that get burned into memory !) which was statically configured - and yes it was a hard limit, I once tried setting it larger and it wouldn’t boot. That was fine “most” of the time - until a report was run that (thanks to inefficient tooling) did a full (non-indexed) join & select with a GB table - that basically stopped the machine with 99 to 100% WIO and we’d know it was happening as the alarms (otherwise known as telephones) started ringing. I later re-wrote that report in Informix, taking care to use indexes, and got the runtime down from 40 hours to 90 seconds - and it could be run during working hours.

Related to that, if the database is idle(ish), then over time its cache will get bumped out of RAM in favour of recordings. Is there a simple way to limit that ?
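One possible approach (a sketch, not something MythTV does out of the box): on a cgroup v2 system you can cap the backend's memory, page cache included, with a systemd drop-in, so bulk recording I/O can't evict the database's cache. The unit name varies by distro (e.g. mythtv-backend.service on Debian/Ubuntu), and 2G is an example value:

```ini
# Drop-in, e.g. /etc/systemd/system/mythtv-backend.service.d/cache.conf
# MemoryHigh throttles the service and reclaims its memory (including page
# cache charged to its cgroup) above the limit.
[Service]
MemoryHigh=2G
```

Apply with `systemctl daemon-reload` and a restart of the backend unit. Note this also constrains the backend's ordinary memory use, so don't set it too tight.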


Simon
