[mythtv-users] a new type of error
stephen_agent at jsw.gen.nz
Fri Mar 9 01:30:50 UTC 2018
On Thu, 8 Mar 2018 18:31:37 -0500, you wrote:
>I've set up my mythtv backend to daily optimize and back up the database. I have it email me with the results, so when I don't get the 2 emails at 7:35am each day, I go looking for problems. The system still records programs and the frontends work for viewing.
>What I've seen lately is I can't ssh or log in at the PC console of the backend, and I get errors on the console like:
>Mar 7 12:11:47 mythbuntu kernel: [317584.873671] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [TFWWrite:25774]
>Mar 7 12:12:15 mythbuntu kernel: [317612.872998] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 23s! [TFWWrite:25774]
>Mar 7 12:12:55 mythbuntu kernel: [317652.872033] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 23s! [TFWWrite:25774]
>Mar 7 12:13:23 mythbuntu kernel: [317680.871359] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [TFWWrite:25774]
>The host name is mythbuntu. It's an Ubuntu 16.04 system.
>Once I reboot, the system works fine and I get my optimize emails after a short time. But a day or so later I notice this problem again. I have not changed anything, apart from some updating using apt update and apt upgrade.
>jfabernathy at gmail.com
Is there any more context in the logs? Googling suggests that
TFWWrite is in mythbackend in ThreadedFileWriter.cpp. I would have
hoped there might be a backtrace logged when the first error happened.
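If nothing stands out near the lockup messages, a couple of grep passes over the usual Ubuntu log locations may turn up the surrounding context. The paths below are my assumption of a stock Mythbuntu 16.04 install; adjust them to match your setup:

```shell
# Look for the lockups (and anything logged just before them) in the
# kernel log, then check mythbackend's own log for a backtrace.
grep -n 'soft lockup' /var/log/kern.log | head
grep -n -i 'backtrace\|segfault\|TFWWrite' /var/log/mythtv/mythbackend.log | head
```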
You should also check that you have plenty of free disk space for
optimize_db to use. You need enough free space for copies of the
files for the largest table in your database to be made. The
largest table is always recordedseek. What do these commands show:
ll -h /var/lib/mysql/mythconverg/recordedseek.M*
df -h /var/lib/mysql/mythconverg
Here are mine:
root at mypvr:~# ll -h /var/lib/mysql/mythconverg/recordedseek.M*
-rw-rw---- 1 mysql mysql 5.8G Mar 9 14:14
-rw-rw---- 1 mysql mysql 5.3G Mar 9 14:14
root at mypvr:~# df -h /var/lib/mysql/mythconverg
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p4 100G 46G 50G 49% /
So I need 11.1 GiB of free space available for optimize_db to
work, but I have lots more than that.
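For what it's worth, that comparison can be scripted. This is just a sketch of the arithmetic above, assuming the recordedseek files and the database both live under the paths shown:

```shell
# Sum the recordedseek file sizes (in MiB) and compare against the
# free space on the filesystem that holds the database.
NEEDED=$(du -cm /var/lib/mysql/mythconverg/recordedseek.M* | awk 'END {print $1}')
AVAIL=$(df -m --output=avail /var/lib/mysql/mythconverg | tail -n 1 | tr -d ' ')
echo "optimize needs roughly ${NEEDED} MiB; ${AVAIL} MiB available"
```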
If you have had a free disk space problem, you may now have old
temporary files in /var/lib/mysql/mythconverg that have been left
behind and need to be cleared up.
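MySQL names its table-rebuild temporaries with a '#sql' prefix, so a failed optimize run usually leaves files like that behind. Something along these lines should show them (run as root, and eyeball the timestamps before deleting anything, since mysqld may legitimately be mid-rebuild):

```shell
# List leftover MySQL temp files in the mythconverg directory.
ls -lh /var/lib/mysql/mythconverg/ | grep '#sql' || echo 'no leftover temp files'
```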
The automated database backups are one obvious culprit if you are
running out of disk space. If you are accumulating more recordings,
your database grows, so the backups grow too, and if they are still
in the default /var/lib/mythtv/db_backups directory, their combined
size will keep increasing until there is insufficient free space for
other things to work. Rebooting then cleans out the files in /tmp
that were left behind when the disk filled up, and things can work
for a few more days until space runs out again. I have had this
pattern happen a couple of times.
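One way to stop the backups growing without bound is a cron job that prunes old ones. The 14-day cutoff and the filename pattern here are my assumptions (based on mythconverg_backup.pl's defaults on my system); check what your backups are actually called before using it:

```shell
# Delete database backups older than 14 days from the default backup
# directory (do a dry run first by dropping the -delete).
find /var/lib/mythtv/db_backups -name 'mythconverg-*.sql.gz' -mtime +14 -delete
```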