[mythtv-users] video artifacts on recordings: could building a raid be the cause?
ub40dd at googlemail.com
Wed Dec 27 12:09:52 UTC 2017
On 27 December 2017 at 10:20, Mike Hodson <mystica at gmail.com> wrote:
> On Wed, Dec 27, 2017 at 4:48 AM, UB40D <ub40dd at googlemail.com> wrote:
>> I rebuilt my myth box after several months of downtime and I decided to
>> bite the bullet and finally build a RAID to store the recordings. This is
>> on ubuntu 16.04, using mdadm.
> My personal preference here is ZFSonLinux, which should work fine on
> Ubuntu 16.04. I'll give some reasons in my further answers inline below.
I'll bear that in mind for the next time I build a RAID...
At the moment I'm extremely keen to get back to a working myth box before
the hols are over and I no longer have the time to fiddle with this stuff.
> RAID5/6-style RAID, while somewhat CPU-intensive to calculate parity, is
> actually a _huge_ amount of I/O.
> Mirror raid is significantly less I/O for each byte of data read/written,
> but is of course less space-efficient.
Indeed. I can sacrifice a fifth drive to the cause, but I can't afford to
have 32 TB mirrored.
>> On the other hand, test recordings made the day before yesterday, before
>> starting to build the raid array, came out without glitches, which seems to
>> point the finger at the raid and also exclude the hypothesis that I should
>> take a ladder and clean the satellite dish or LNB (also, yesterday was a
>> fine day and it wasn't snowing, so that was not the reason either).
>> Building the raid array seems to take 2-3 days (!) so I'm not super-keen
>> to stop it just to check if recordings come out clean. I did have to
>> power-cycle the machine at some point and had to restart building the raid
>> from zero ;-(
> This is my reason for ZFS: it does _not_ need to "sync" itself upon
> creation; it does not need to "drop a disk" if one simple read error occurs.
> It "just handles it" and you can optionally "scrub" at any time to verify the data.
I don't know what "scrub" means but in the tests I did before committing to
using raid (with 5x 2 GB partitions rather than 5x 8 TB drives), after the
array was built it was able to "just handle" an error. I found that if I
had a 6th drive I could add it in as a spare and it would be brought inline
as and when necessary.
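Concretely, the spare step in my test looked something like this (device names here are just examples, not my actual layout):

```shell
# add a hot spare to an existing array; mdadm pulls it in
# automatically if a member disk fails
sudo mdadm --add /dev/md0 /dev/sdf
sudo mdadm --detail /dev/md0    # the new disk should be listed as "spare"
```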
(The thing I found needed more work is being notified that something had gone wrong.)
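For mdadm, the notification bit turns out to be mostly a one-liner in the config, assuming a working local mailer (the address is a placeholder):

```shell
# /etc/mdadm/mdadm.conf -- the mdadm monitor daemon mails alerts here
MAILADDR you@example.com
```

Running `sudo mdadm --monitor --scan --oneshot --test` then sends a test alert for each array, which is a handy way to confirm mail delivery actually works before a disk dies.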
>> Does it seem plausible to anyone that building the raid could be the
>> cause for the video artifacts? The machine is a 4-core i5 with 8 GB RAM.
>> Note that the faulty recordings were made to an independent HDD, not to the
>> raid that was being built.
> Entirely plausible, for reasons of I/O load.
> Syncing a RAID is _very_ I/O intensive. You're literally reading every
> block of every disk, and calculating a new value to be written to the
> parity-stripe on RAID5. And with RAID6 you end up writing to 2 parity stripes.
> For example, if you have 5 disks, you have a pattern of:
> Disk1 R R R R W R R R R W R R R R W
> Disk2 R R R W R R R R W R R R R W R
> Disk3 R R W R R R R W R R R R W R R
> Disk4 R W R R R R W R R R R W R R R
> Disk5 W R R R R W R R R R W R R R R
> CONSTANTLY until the array is sync'd.
> Each column is a distinct point in time. 4 disks read, one written, in a
> stripe pattern.
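If I've understood the mechanism right, that W column is just the XOR of the other four blocks in the stripe, which is why a rebuild has to read everything. A toy sketch (the byte values are made up):

```shell
# RAID5 parity is the XOR of the data blocks in each stripe.
# Toy example: 4 "data disks" each holding one byte.
d1=0x4d; d2=0x79; d3=0x74; d4=0x68
parity=$(( d1 ^ d2 ^ d3 ^ d4 ))
printf 'parity byte: 0x%02x\n' "$parity"      # -> parity byte: 0x28
# If any one disk dies, its byte is recoverable by XORing the survivors:
recovered=$(( parity ^ d2 ^ d3 ^ d4 ))
printf 'recovered d1: 0x%02x\n' "$recovered"  # -> recovered d1: 0x4d
```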
How can I establish whether the SATA card or the motherboard impose limits on
the aggregate I/O bandwidth of the drives connected to them? I imagine they
won't support all drives going at full SATA speed (if they did, I wouldn't
be getting these problems).
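In case it helps anyone searching the archive later, the crude test I have in mind is: benchmark each drive on its own, then all of them at once, and see whether the per-drive numbers drop (device names are examples for my box):

```shell
# sequential read speed of each drive on its own
for d in /dev/sd{a,b,c,d,e}; do
    sudo hdparm -t "$d"
done

# now all five at once: dd reports per-drive throughput on stderr;
# if the total is well below 5x the single-drive figure, the
# controller (or its PCIe/DMI link) is the bottleneck
for d in /dev/sd{a,b,c,d,e}; do
    sudo dd if="$d" of=/dev/null bs=1M count=2048 iflag=direct &
done
wait
```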
>> I can imagine the problem being disk bandwidth rather than CPU. I have 6
>> SATA ports on the MB and 4 on a PCI-E card, all connected to drives (and a
>> 700 W PSU). The raid array uses 5 drives and the recording(s) would use a
>> 6th, with the other 4 being idle. There were at times several simultaneous
>> recordings, all going to the same independent drive.
> Are you recording _to_ the array?
No, as you later noted I had said.
> If not, then there is likely still some I/O issue if the '6th recording
> disk' (which sounds like precisely 1 drive is taking all the recordings)
> is being bandwidth starved due to limitations in the chipset. I've seen
> Intel's onboard SATA controllers bandwidth-limited to less than 'every disk
> at once' and have found that most pre-SATA3 generations max out around
> 500MB/s regardless of what disks or combinations of disks I was accessing.
The motherboard has 2x SATA3 and 4x SATA2 ports; the PCIe card is 4x SATA3. I
never had any illusions that any spinning drive connected to SATA3 would
give me 6 Gb/s, so I never actually bothered much (and recording a couple of
TV channels at a time doesn't need that anyway). All I did was put the
SSD I boot from on a SATA3 port.
> I had 6 spinning disks on 1 controller, and noticed that I could get
> ~150-180MB/s sequentially read per disk, but if all at once were accessing
> I got maybe 500MB/s total inclusive of any overhead.
> If you can, I highly suggest putting the RAID array split between the
> motherboard and PCIE card,
That's how it is at the moment, if nothing else because the PCIE card has 4
slots and the raid has 5 drives.
> or splitting the single recording drive off to the PCIE card.
That is not the case at the moment. I'm willing to go with your
recommendation, but would you explain why it would be beneficial? I don't
quite get it.
> Finally, I would highly suggest to keep using the single disk, or a
> mirror, to record on, and then only use the RAID5 for long-term storage.
> Writes to RAID5 are also very I/O intensive, as it's doing the R R R R W
> cycle for every single block written.
Yes, that was my plan anyway, primarily because I think that some of my
el-cheapo 8 TB drives (in deference to the "I" in RAID) are rather slow.
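The archiving step could be as simple as a cron job along these lines, assuming both directories belong to the same MythTV storage group (so the backend can still find moved files by name; all paths are placeholders):

```shell
# move finished recordings more than a day old from the fast single
# disk to the RAID array; both dirs are in the same storage group
FAST=/srv/mythtv/rec-fast       # single recording disk
ARCHIVE=/srv/mythtv/rec-archive # directory on the RAID5 array
find "$FAST" -maxdepth 1 -name '*.ts' -mmin +1440 \
    -exec mv -n {} "$ARCHIVE"/ \;
```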