[mythtv-users] a new type of error

Stephen Worthington stephen_agent at jsw.gen.nz
Fri Mar 9 04:40:24 UTC 2018


On Thu, 8 Mar 2018 23:06:17 -0500, you wrote:

>> On Mar 8, 2018, at 8:30 PM, Stephen Worthington <stephen_agent at jsw.gen.nz> wrote:
>> 
>> On Thu, 8 Mar 2018 18:31:37 -0500, you wrote:
>> 
>>> I've set up my mythtv backend to daily optimize and back up the database.  I have it email me with the results.  So when I don't get the 2 emails at 7:35am each day, I go looking for problems.  The system still records programs and the frontends work for viewing.
>>> 
>>> What I've seen lately is I can't ssh or login at the PC console of the backend, and I get errors on the console like:
>>> Mar  7 12:11:47 mythbuntu kernel: [317584.873671] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [TFWWrite:25774]
>>> Mar  7 12:12:15 mythbuntu kernel: [317612.872998] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 23s! [TFWWrite:25774]
>>> Mar  7 12:12:55 mythbuntu kernel: [317652.872033] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 23s! [TFWWrite:25774]
>>> Mar  7 12:13:23 mythbuntu kernel: [317680.871359] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [TFWWrite:25774]
>>> 
>>> The host name is mythbuntu.  It's an Ubuntu 16.04 system.
>>> 
>>> Once I reboot, the system works fine and I get my optimize emails after a short time.  But a day or so later I notice this problem again.  I have not changed anything, but I have done some updating using:  apt update && apt upgrade.
>>> 
>>> Any ideas??
>>> 
>>> Jim Abernathy
>>> jfabernathy at gmail.com
>> 
>> Is there any more context in the logs?  Googling suggests that
>> TFWWrite is in mythbackend in ThreadedFileWriter.cpp.  I would have
>> hoped there might be a backtrace logged when the first error happened.
>> 
>> You should also check to make sure you have plenty of free disk
>> space for optimize_db to use.  You need enough free space for
>> copies of the files of the largest table in your database to be
>> made.  The largest table is always recordedseek.  What do these
>> commands show:
>> 
>> ll -h /var/lib/mysql/mythconverg/recordedseek.M*
>> df -h /var/lib/mysql/mythconverg
>
>-rw-rw---- 1 mysql mysql  12M Mar  8 23:00 recordedseek.MYD
>-rw-rw---- 1 mysql mysql  16M Jan 20 13:49 recordedseek.MYD-180120142626.BAK
>-rw-rw---- 1 mysql mysql  14M Feb  5 08:48 recordedseek.MYD-180205085856.BAK
>-rw-rw---- 1 mysql mysql  11M Mar  8 23:00 recordedseek.MYI
>
>jim at mythbuntu:~$ sudo df -h /var/lib/mysql/mythconverg
>Filesystem      Size  Used Avail Use% Mounted on
>/dev/sda1       451G  4.2G  428G   1% /
>jim at mythbuntu:~$ 

It looks like you do not have a problem with the size of recordedseek
then - you must not have very many recordings at all.  However, those
*.BAK files should not be there - I think they indicate some sort of
problem, and they will likely hang around forever unless you clean
them up manually.  There may well be other files in the
/var/lib/mysql/mythconverg directory that need cleaning up as well.
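A quick way to see what is left over (just a sketch - the path and the
*.BAK naming come from your listing above; verify the live tables are
healthy, e.g. with mysqlcheck, before deleting anything):

```shell
# List any leftover *.BAK files in the mythconverg data directory
# (the stale copies left behind by an interrupted optimize run).
ls -lh /var/lib/mysql/mythconverg/*.BAK 2>/dev/null || echo "no .BAK files found"

# Once you are satisfied the live tables are intact, the .BAK copies
# can be removed interactively:
# rm -i /var/lib/mysql/mythconverg/*.BAK
```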

I am also unsure how you have managed to get recordedseek files that
small - how many recordings do you have?  Files that size would only
account for a handful of recordings, I would think.  If you can run
SQL commands, what does this show:

select count(*) from recorded;

Here is mine:

MariaDB [mythconverg]> select count(*) from recorded;
+----------+
| count(*) |
+----------+
|    26434 |
+----------+
1 row in set (0.00 sec)

Dividing the size of my recordedseek.MYD file (6169843925 bytes) by
that count gives about 233,405.6 bytes of recordedseek per recording,
so if yours are similar, you have around 53 recordings.  If you have
many more than that, it would indicate a problem with the recordedseek
table.
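For what it is worth, the back-of-the-envelope arithmetic above looks
like this (a sketch in Python, using only the numbers quoted in this
thread - the 12 MiB figure is Jim's recordedseek.MYD size from the ls
output):

```python
# Estimate how many recordings a given recordedseek.MYD size implies,
# using my own database as the baseline.
baseline_myd_bytes = 6169843925   # my recordedseek.MYD size in bytes
baseline_recordings = 26434       # my `select count(*) from recorded`

bytes_per_recording = baseline_myd_bytes / baseline_recordings

# Jim's recordedseek.MYD is about 12 MiB per the listing above.
their_myd_bytes = 12 * 1024 * 1024
estimated_recordings = their_myd_bytes / bytes_per_recording

print(f"~{bytes_per_recording:,.0f} bytes of recordedseek per recording")
print(f"~{estimated_recordings:.0f} recordings implied")
```

This is only a rough proportion - seek-table entries scale with
recording length and format, so two systems with different typical
recordings will not match exactly.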


More information about the mythtv-users mailing list