[mythtv-users] find_orphans.py crashes mythbackend

Udo van den Heuvel udovdh at xs4all.nl
Sun Sep 4 04:10:27 UTC 2016


On 02-09-16 17:24, Michael T. Dean wrote:
>> I tried to delete 4000+ orphaned recordings via find_orphans.
> 
> That's a lot of files to be deleting at once--likely far more files than
> your system's limits allow a single user to open at once.  

Does find_orphans limit the number of files it asks the backend to
delete at once?
Does the backend limit the number of deletions it sends to MySQL at
once?
Assuming 'it' can delete every file it is sent in one go, at any time,
is a strange assumption.

> It's also not
> normal for a MythTV user to have 4000+ orphans, 

So I should not be able to handle 4000+ orphans without crashing
something? What is the 'normal' limit here?
I see only assumptions, not sound logic, and thus no sound code.

> and it's also not normal
> for a user to try to delete 4000+ recordings (orphaned or not-orphaned)
> at a time.

It is effectively a DoS against the backend with a 100% success rate.
Just like the memory leaks, but far, far quicker.

>> Every time the backend would stop being a process. Exit. Vanish. Go away.
> 
> No, it would "crash"--it was likely killed by your OS because you or
> your distro maintainers have configured your system to limit file handle
> usage.

The OS didn't say so. Why not?

> FWIW, your "handiness" with SQL probably left stuff in the DB related to
> the orphans.  

Please explain where in the database, besides the recorded table (or
whatever it was called), the orphans hide.
What other option do I have besides find_orphans? (Which didn't work.)

> There's a reason that find_orphans.py doesn't actually do
> any deletion at all--it just asks the backend to do it--because deletion
> from MythTV requires cleanup of data from multiple connected tables. 

Sure, and mythbackend handles that task so very well that it should be
the only way. Another assumption gone wrong.


> Anyway, what do you get from:
> 
> ulimit -Hn
4096

> ulimit -Sn
1024

> grep nofile /etc/security/limits.conf
#        - nofile - max number of open file descriptors

>  and
> cat /proc/sys/fs/file-max
371081

>  or
> sysctl fs.file-max
fs.file-max = 371081

> Note, too, that the ulimit -n will show the limit for the current shell
> and any processes started by that shell, so, really, to get the most
> useful information, you should find out what ulimit -n gives for the
> shell that starts mythbackend process.  

Probably a root shell, so I ran these from a root shell.
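For the running process itself, the limits can also be read straight
from /proc, without guessing which shell started it (assuming pidof
finds a single mythbackend process):

    grep 'open files' /proc/$(pidof mythbackend)/limits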

> But, in the absence of better
> information, the ulimit -n information run from a login shell (at
> minimum su - mythtv )

The mythtv user shows the same limits.

> If you change the limits to allow the user running mythbackend to have
> sufficient files open for the task you're hoping to achieve (in other
> words, if you configure your system to allow it to do what you
> ask/expect of it), do you still get a crash?

Allowing more open files than the default is a workaround at best.
Deleting the files in smaller chunks, or even one by one, instead of
all at once (nobody ever erases 4000...) would work better, as sketched
above.
Thanks to my 'handiness', find_orphans no longer finds any orphans, so
I cannot reproduce the issue.
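For completeness, raising the limit (workaround or not) would look
roughly like this, assuming the backend runs as user mythtv. Note that
/etc/security/limits.conf only applies to PAM login sessions, so a
backend started at boot by systemd would need LimitNOFILE= in its
service unit instead:

    # /etc/security/limits.conf
    mythtv  soft  nofile  8192
    mythtv  hard  nofile  16384

    # or, in the mythbackend systemd service unit
    [Service]
    LimitNOFILE=16384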

Udo

