[mythtv-users] Clearing autoexpire list of missing recordings

Eric Sharkey eric at lisaneric.org
Sun Mar 16 14:43:52 UTC 2014

On Mon, Feb 24, 2014 at 10:07 AM, Eric Sharkey <eric at lisaneric.org> wrote:
> On Mon, Feb 24, 2014 at 9:04 AM, Michael T. Dean
> <mtdean at thirdcontact.com> wrote:
>> Alternatively, if it's a lot of files, you can use find_orphans.py (as
>> linked previously), which does work fine with 0.27.
> This hasn't been my experience.  I would go so far as to say it just
> doesn't work at all.  For me, at least.  It may delete a few
> recordings, but it tends to crash the backend and leave most things
> undeleted.
> I'll see if I can run the backend under the debugger and get a stack trace.

I finally had a chance to try this.  After compiling a non-stripped
binary of the backend and running it under gdb, I reran
find_orphans.py, and this time it did not cause a crash.  The script
still failed, but the backend stayed up.  Where previously the backend
would delete only a handful of recording entries (typically 2-6), this
time it kept going.  The script announced a failure after only about a
minute, but over the next hour or so the backend kept cleaning up,
reducing the list of orphaned recordings from 1033 down to just 294,
where it seemed to stop.

Rerunning find_orphans.py a second time did not produce a failure
message, and the backend eventually cleaned up the remaining entries
after about another half hour.

I suspect that the script (or associated python libraries) has some
problem when the number of orphaned recordings is large and/or the
backend is busy.  Maybe some timeout is not set high enough.  I don't
believe I'll be able to reproduce this again without deliberately
creating more orphaned entries, which I'm hesitant to do.
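For what it's worth, the behavior is consistent with a client-side
timeout.  Here's a small self-contained Python sketch (entirely
hypothetical; this is not the actual MythTV bindings code, just an
illustration of the failure mode) showing how a script with a short
socket timeout can report failure even though the server keeps working
through its queue:

```python
import socket
import threading
import time

def slow_server(listener):
    # Simulate a busy backend: accept one request, then take a long
    # time to process it before replying.
    conn, _ = listener.accept()
    conn.recv(1024)           # receive the "delete orphans" request
    time.sleep(2)             # backend is busy deleting many recordings
    conn.sendall(b"done")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=slow_server, args=(listener,), daemon=True)
t.start()

client = socket.socket()
client.settimeout(0.5)        # client timeout shorter than the server's work
client.connect(("127.0.0.1", port))
client.sendall(b"delete orphaned recordings")
try:
    client.recv(1024)
    result = "ok"
except socket.timeout:
    # The script would report failure here...
    result = "client reported failure"

t.join()                      # ...but the server finishes its work anyway
print(result)
```

If something like this is what's happening, the script's "failure" is
really just the client giving up early, which would explain why the
backend continued reducing the orphan count long after the script
reported an error.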
