[mythtv-users] mythconverg_backup.pl inefficiencies

Will Dormann wdormann at gmail.com
Thu Nov 19 01:48:48 UTC 2015


Hi folks,

I recently noticed that mythconverg_backup.pl was taking a long time to
complete.  While watching the output directory, I saw that the backup
happens in two distinct steps:

1) Dump the database, uncompressed, to the output directory
2) Compress the resulting dump file
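
In shell terms, that's roughly the equivalent of the following (the
paths and the mysqldump invocation here are illustrative only, not the
script's exact command line):

    mysqldump mythconverg > /backup/mythconverg.sql   # uncompressed dump written to the output dir
    gzip /backup/mythconverg.sql                       # second pass re-reads and rewrites the whole dump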

This seems very un-Unix-like to me.  Granted, the backup was noticeably
slow for me for two reasons:

1) My backup target directory is a 100Mbit-mounted network share (don't
ask... my ION doesn't play well at gigabit speeds)
2) My MythTV installation is many years old, so the database has grown
quite large.

But it can be done better.  In particular, I just changed line 1252 to:

               "'$safe_db_name' | /usr/bin/pigz >'$output_file.gz'";

and then commented out this call near the end of the script:
	#compress_backup;
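
The net effect is roughly this single pipeline (again, just an
illustration; the real command is assembled by the script from its own
settings and credentials):

    mysqldump mythconverg | /usr/bin/pigz > /backup/mythconverg.sql.gz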


Sure, it's a little hacky, but the speed improvement is noticeable.
Piping the compression inline with the dump saves the extra step and
avoids transferring the uncompressed data *twice* (once to write it
out, and once again to read it back for compression), and pigz is more
efficient because it uses all available CPU cores for the compression.
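(If grabbing every core on the backend is a concern, pigz can be capped
with its -p option, e.g. "/usr/bin/pigz -p 2" in the same spot; that's
standard pigz behaviour, nothing specific to this script.)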

Is there a reason why the current script does the backup in two steps?


Thanks
-WD

