[mythtv-users] Recent "pixelation" or "glitches" in recordings (HDHR related?)
Stephen Worthington
stephen_agent at jsw.gen.nz
Thu Apr 25 02:21:11 UTC 2013
On Wed, 24 Apr 2013 15:59:23 -0700, you wrote:
>On Wed, Apr 24, 2013 at 3:04 PM, Mike Perkins
><mikep at randomtraveller.org.uk>wrote:
>
>> On 24/04/13 20:34, Thomas Mashos wrote:
>>
>>>
>>> Why should we do ANY check? I think it's reasonable to expect that
>>>
>>> A) Users shouldn't be touching recording files outside of MythTV
>>>
>>> B) Recording drives are relatively static in their location.
>>>
>>> Because 'bad stuff' happens. A drive can die. Chunks of a file can
>> vanish due to bit rot. A *single bit* on a sector can die, meaning the
>> whole file is unreadable (depending on which bit, of course). The drive in
>> question might be on a slave tuner which is powered off. Users might not
>> touch the files outside of mythtv but other software might when it fails.
>>
>> Users aren't the only reason files can go missing.
>>
>> --
>>
>> Mike Perkins
>>
>>
>
>
>None of those reasons explains why we need a complete scan when entering
>Watch Recordings. The only time such a scan would ever make sense is if
>A) mythbackend had the ability to reschedule known bad recordings
>(AFAIK, it does not), or B) mythbackend had the ability to send an early
>warning to the frontend/user (marking a recording with an X that a user
>won't see unless they go into that particular show group doesn't count).
>Displaying an "unavailable" message when attempting to access the recording
>seems sufficient.
>
>
>>Because 'bad stuff' happens.
>
>>A drive can die.
>Meh. Have the backend check when it's trying to write to or read from a
>file on that drive. IMO it makes more sense to check the drive once than to
>check every recording on the drive.
>
>>Chunks of a file can vanish due to bit rot. A *single bit* on a sector can
>die, meaning the whole file is unreadable (depending on which bit, of
>course).
>Worthless. When entering the frontend, it isn't doing a complete scan of
>every file (that would take far too long on multi-terabyte backends). It's
>probably just checking that each file exists. Further, even if it were
>doing a scan and could identify files that exist but are corrupt, it
>doesn't notify the user or attempt to schedule a rebroadcast.
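The distinction being argued here, an existence-only check versus a full content scan, can be sketched roughly as follows. This is a hypothetical illustration, not MythTV's actual code; the directory list and basenames are made-up parameters:

```python
import os

def find_missing(recording_dirs, basenames):
    """Existence-only check: one cheap stat() per file per directory,
    no reading of file contents. This is the kind of fast check a
    frontend can afford on entry; a full scan would have to read
    every byte of every recording to catch bit rot."""
    missing = []
    for name in basenames:
        # A recording is present if it exists in any configured directory.
        if not any(os.path.exists(os.path.join(d, name)) for d in recording_dirs):
            missing.append(name)
    return missing
```

An existence check like this completes in milliseconds even for thousands of recordings, which is presumably why it is the only check done up front, with corruption discovered later at playback time.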
>
>>The drive in question might be on a slave tuner which is powered off.
>Almost worthless. The backend knows the slave backend isn't available, so
>it should mark the recording as not available without needing to scan for
>it.
>This is slightly more complicated as you can have shared storage between
>backends, but this shouldn't really be an issue as 95+% of people use a
>single combined Backend/Frontend.
>
>>Users might not touch the files outside of mythtv but other software might
>when it fails.
>I've never seen a single case where any of the MythTV devs were OK with any
>third-party application touching the recordings directly, so why is it up
>to MythTV to verify that some third-party app didn't screw with the
>recordings? If a user sets up something that screws with MythTV recordings,
>I think it is perfectly reasonable that MythTV would just throw an error
>when attempting to play back that recording.
>
>
>Thanks,
>
>Thomas Mashos
I have to say that I like the way it checks for missing files. I have
had problems with a tuner, for example, where the recording supposedly
started but nothing was written to the recording file. It was really
good to get an X showing that nasty things like that had happened to
the recording. And I have also had one of my drives on an external
mount go offline entirely after I knocked its cable.
And I commonly move files between drives. I do it every time I add a
new recording drive, in order to balance the free storage space. If
you do not do that when your existing recording drives are full and you
add an empty one, you can wind up with all your recordings landing on
that one drive at once. I regularly have times when 6 or more
recordings are happening at once (especially during overlapping
pre-roll and post-roll). If they are all going to just one drive, you
*will* get missing bits in one or more files.
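The balancing idea above, spreading simultaneous recordings so no single drive takes all the write load, comes down to picking the drive with the most free space. A minimal sketch (hypothetical helper, not MythTV's actual scheduler logic):

```python
import shutil

def pick_drive(drives):
    """Return the recording directory with the most free space.
    Balancing new recordings this way avoids funnelling 6+
    simultaneous streams onto one freshly added empty drive."""
    return max(drives, key=lambda d: shutil.disk_usage(d).free)
```

In practice MythTV's storage groups apply their own weighting, but the underlying principle is the same: concurrent writes should land on different spindles.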
However, I also like the idea of having one ultra-fast SSD allowing
the magnetic drives to be spun down. I have 6 recording drives now,
and if they are drawing say 7 watts each on average while idle, that
is 42 watts continuously, 24/7. That works out at NZ$78.42 per year at
my current rate of NZ$0.213136 per kWh, and I hear that New Zealand
electricity is reasonably cheap compared to what many of you out there
pay. So from an environmental point of view it is a nice idea, even
though it would take several years to pay for itself at current SSD
prices.
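The arithmetic above can be checked in a couple of lines (the wattage and tariff are the figures quoted in this message, not general data):

```python
def annual_cost_nzd(watts, rate_nzd_per_kwh):
    """Continuous draw in watts -> yearly cost at a flat per-kWh rate."""
    kwh_per_year = watts * 24 * 365 / 1000  # 42 W -> 367.92 kWh/year
    return kwh_per_year * rate_nzd_per_kwh

print(round(annual_cost_nzd(42, 0.213136), 2))  # -> 78.42
```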
One other problem I can see with the SSD idea is that if it ever gets
full before files get moved off it, then recent recordings will get
expired. You would really want the file-moving process to be triggered
automatically whenever the SSD nears the point where expiry would
happen.
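The trigger condition suggested above is just a free-space threshold check. A sketch of how such a watchdog might decide when to start draining the SSD (the threshold value and function name are assumptions for illustration):

```python
import shutil

def ssd_needs_drain(ssd_path, min_free_fraction=0.15):
    """True when free space on the SSD falls below the threshold,
    i.e. when files should start moving to the magnetic drives
    before the auto-expirer would delete recent recordings."""
    usage = shutil.disk_usage(ssd_path)
    return usage.free / usage.total < min_free_fraction
```

A cron job or systemd timer could poll this every few minutes and kick off the migration, keeping the safety margin well ahead of MythTV's own expiry threshold.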