[mythtv-users] Immediate autoexpire recent rec. despite plenty of old rec. to expire

Michael T. Dean mtdean at thirdcontact.com
Wed Oct 30 12:58:37 UTC 2013


On 10/30/2013 07:09 AM, John Pilkington wrote:
> On 30/10/13 09:48, warpme wrote:
>> Hi *
>>
>> I need a small consultation regarding an unpleasant issue which
>> started recently on my production system: random recent recordings
>> quickly 'disappear', causing user complaints like 'argh, my favorite
>> show wasn't recorded again'.
>>
>> Checking the logs, it looks like those recordings were recorded OK -
>> but they were expired immediately (within minutes of the recording
>> finishing). My hypothesis is that myth is expiring them to make room
>> for new recordings. But at the same time, recordings that are a few
>> weeks old are not expired...
>> This was a surprise for me, as on the 6T Default SG I have 2T of space
>> "Used by Auto-expirable Recordings" - so theoretically, old recordings
>> from this pool should be used first to make room for new recordings.
>>
>> My 6T Default SG is the sum of 2 volumes: a 2T partition used since
>> the initial system build (/mythtv/tv) and a 4T drive (/myth/tv1) added
>> half a year ago as a storage expansion.
>> The Storage Scheduler is "Combination".
>> Looking at a few 'too quickly' expired recordings - they were always
>> on '/mythtv/tv' (the old drive).
>>
>> Now I see a potential root cause: if the old drive is almost full
>> with non-expirable recordings, then when the scheduler starts multiple
>> new recordings (my wife can schedule 4-7 concurrent in prime time),
>> some of them will be assigned to this drive and will cause expiration
>> of other, recent recordings - even though the other drive (/myth/tv1)
>> had month-old candidates to expire. This is just a theory.
>>
>> To verify this hypothesis it would be good to count the space occupied
>> by non-expirable recordings per SG member ('/mythtv/tv' here).
>> Is there an easy way to do this?
>>
>> Generally, looking at this issue from an overall perspective, I can
>> see where I may have made a mistake: I assumed SGs were a nice way to
>> expand storage when a user runs out of space.
>> I was happy not to go with things like LVM (where a member failure
>> brings the whole volume down).
>>
>> The hypothesis from the beginning of this post (if verified as
>> correct) suggests that SGs are perfect for I/O scaling - but not for
>> space scaling.
>> I suspect the only proper solution to my issue is moving from
>> application-level striping (MythTV SGs) to fs-level striping (LVM) or
>> mass-storage-level striping (RAID)...
>> Unfortunately this is quite difficult for me to do, as I don't have
>> the 6T of temporary storage needed for such a conversion.
>>
>> Does anybody see another solution here?
>>
>
> It sounds as if you have a problem with the storage group disk 
> scheduler, on around page 3 of the General menu in mythtv-setup.  I 
> know I lost some recordings when I changed from 'Balanced free space' 
> recently after, IIRC, Mike Dean said that one had a bug.  For me the 
> alternatives seemed worse, and I went back.  I can't really 
> investigate further without disrupting things, but that's what I would 
> check.  Of course, I don't know why you are suddenly seeing this.

The bug in balanced free space (and balanced percent free space) only 
affects users with multiple backends who have differently-named 
directories that appear only on one of the backends.

Mike
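The per-member accounting warpme asks about (how much space non-expirable recordings occupy in each storage group directory) can be sketched in a few lines. This is a minimal illustration, not part of MythTV: it assumes you have already exported each recording's file path, size, and autoexpire flag (for example from the `basename`, `filesize`, and `autoexpire` columns of the `recorded` table, with paths resolved against the SG directories) and simply tallies non-expirable bytes per directory. The sample data below is hypothetical.

```python
import os
from collections import defaultdict

def nonexpirable_space_per_dir(recordings):
    """Sum bytes used by non-expirable recordings, keyed by the
    storage group directory holding each file.

    `recordings` is an iterable of (path, size_bytes, autoexpire)
    tuples; the export step from the MythTV database is left to
    the reader."""
    totals = defaultdict(int)
    for path, size, autoexpire in recordings:
        if not autoexpire:  # autoexpire == 0 means "never expire"
            totals[os.path.dirname(path)] += size
    return dict(totals)

# Hypothetical sample mirroring warpme's two SG members:
sample = [
    ("/mythtv/tv/1001_20131029.mpg", 4_000_000_000, 0),  # non-expirable, old drive
    ("/mythtv/tv/1002_20131030.mpg", 2_000_000_000, 1),  # expirable
    ("/myth/tv1/1003_20131001.mpg", 3_000_000_000, 0),   # non-expirable, new drive
]
print(nonexpirable_space_per_dir(sample))
# → {'/mythtv/tv': 4000000000, '/myth/tv1': 3000000000}
```

A per-directory total close to the member's capacity would support the hypothesis that the old drive has no expirable candidates left when new recordings land on it.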

