[mythtv-users] **Update - FIXED** mythbackend not responding - and even more weirdness

Mark Knecht markknecht at gmail.com
Thu Dec 28 13:56:06 UTC 2006


On 12/27/06, devsk <funtoos at yahoo.com> wrote:
>
>
> > All of that loading typically happens right after 9PM or 10PM for
> > obvious reasons. The load goes up on the backend, but it's not at
> > 100%. The machine acts fine (it's responsive, etc.), but mythbackend quits.
>
> I think I misunderstood your problem then. Your problem is that ONLY the
> mythbackend process is quitting/dying under load on an SMP kernel but is
> fine in uniprocessor mode.

NO - I don't know that. I've only run mythbackend under SMP. I've
never run it under a UP kernel.

> But the kernel settings that I mentioned may still be
> relevant, because mythbackend is typically a hugely multi-threaded
> application (at idle it has something like 15 threads), and under load,
> scheduling effects may become more pronounced. One thread getting killed
> is enough to bring the whole process down. Moreover, the kind of load you
> are putting it through means that the I/O subsystem is overloaded as well.

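(Side note: if anyone wants to see the thread count for themselves,
something like this should do it, assuming mythbackend is running and
pidof is available:

    # read the thread count straight out of /proc
    grep Threads /proc/$(pidof -s mythbackend)/status

That just prints the Threads line from the process status file.)
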
And my I/O subsystem is more complex than that. I'd like to change it. The
backend machine has two PVRs in it, but all the actual storage is on the
network via an NFS mount. The backend machine itself doesn't
have enough storage and needs a larger disk, so I save all the video to
a file server over 100Mb Ethernet.
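
For reference, the mount is just a normal NFS entry in /etc/fstab, something
along these lines (server name, export path, and options here are made-up
examples, not my exact setup):

    # hypothetical fstab entry for the video store over NFS
    fileserver:/export/video  /mnt/video  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0

And 100Mb Ethernet tops out at roughly 11 MB/s in practice, so the network
link itself is part of the I/O pressure.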

>
> One more thing you might want to check is whether there are OOM-killer
> messages in your /var/log/messages. There was a recent OOM-killer bug fixed
> in the kernel (Linux 2.6.19-rc4) where it incorrectly killed a process even
> when there was enough memory available. I have had firefox disappear
> suddenly in front of me in the past because of the OOM killer. I have
> since put
>
> vm.overcommit_memory = 2
> vm.overcommit_ratio = 90
>
> in my /etc/sysctl.conf to effectively (though still not completely) disable
> memory overcommit and get closer to traditional Unix memory allocation,
> wherein malloc returns non-null only if there is enough memory available,
> fork fails if there is not enough memory available, etc. This way you
> avoid random process kills when under memory pressure, but take the memory
> allocation failures upfront. Which of the two is better is a trade-off and
> very subjective. I like apps to fail upfront with ENOMEM under memory
> pressure and let the already-running processes keep running. Apparently,
> Linux defaults to overcommitting memory to begin with (so malloc may
> return non-null even though all the needed VM may not be available) and
> later kills processes based on a heuristic OOM score.
>
> -devsk
>

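For anyone else wanting to check the same things, this is roughly what I'll
be doing (as root; the log file may be /var/log/syslog on some distros):

    # look for OOM-killer activity in the system log
    grep -i oom /var/log/messages

    # apply the /etc/sysctl.conf changes without rebooting
    sysctl -p

    # verify the new values took effect
    cat /proc/sys/vm/overcommit_memory   # should print 2
    cat /proc/sys/vm/overcommit_ratio    # should print 90
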
Again, thanks for the info. I'll be checking it out.

cheers,
Mark

