[mythtv-users] Optimizing performance: xfs fragmentation

Ian Evans dheianevans at gmail.com
Sun May 3 15:11:02 UTC 2020


On Sun, May 3, 2020, 2:10 AM Stephen Worthington, <stephen_agent at jsw.gen.nz>
wrote:

> On Sat, 2 May 2020 23:32:41 -0400, you wrote:
>
> >On Sat, May 2, 2020, 11:23 PM Stephen Worthington, <
> stephen_agent at jsw.gen.nz>
> >wrote:
> >
> >> On Sat, 2 May 2020 14:38:50 -0400, you wrote:
> >>
> >> >A little MythTV math problem.
> >> >
> >> >I have a 9 year old 3 TB recording drive with an xfs file system that's
> >> >never been defragmented and currently has a fragmentation factor of
> >> 95.88%.
> >> >
> >> >If I run xfs_fsr on it for two hours everyday at 5am, how long would
> you
> >> >expect it to take to get to a better place? Days, weeks, months?
> >>
> >> For MythTV recording drives, fragmentation is not usually a problem.
> >> MythTV expires recordings when the free space gets too low (somewhere
> >> below 20 GiB), so there is always enough free space for the next
> >> recording.  That usually means that the fragmentation does not get
> >> bad.  I use JFS myself, which is also somewhat resistant to
> >> fragmentation problems.  I do not know enough about XFS to say how
> >> well it handles fragmentation.
> >>
> >> You did not mention how full your recording drive is.  If it has lots
> >> of free space, then fragmentation is not going to be an issue.  But if
> >> you are running it as full as MythTV allows, then depending on how
> >> good XFS is, it could be getting to be a problem, gradually.  But 9
> >> years is a long time - if you have been running it that full for
> >> years, and have not noticed any performance issues, then it is
> >> unlikely to be getting worse and will likely have stabilised at the
> >> current fragmentation level.
> >>
> >> Also, 9 years is a pretty old drive.  I have older, but I am
> >> progressively replacing my older drives - I really do not want one to
> >> fail and lose all the data.  A new drive would be much faster and
> >> could be a lot bigger.
> >>
> >
> >The drive has about 200 gigabytes free out of 3 terabytes.
>
> 200 gigs is a decent amount of free space.  It should be possible to
> defrag reasonably quickly with that much free space available.  And
> there should not be much fragmentation anyway.
>
> >I was actually basing the idea of defragmenting off of the wiki (
> >https://www.mythtv.org/wiki/Optimizing_Performance#XFS-Specific_Tips)
> where
> >one suggestion was to run xfs_fsr for 8 hours overnight via cron.  So is
> >that an outdated recommendation?
>
> Not having ever used XFS, I am not the best person to give advice
> about it.  Looking at that page, it seems that you can tell xfs_fsr
> how long to run for when it is defragging.  So if you want to run it,
> I would recommend only doing it when MythTV is not busy.  You can use
> my gaps program:
>
> http://www.jsw.gen.nz/mythtv/gaps
>
> to tell you when you have a gap in your recordings and then run
> xfs_fsr for a bit less than the length of that gap.  You would also
> want to make sure no-one was going to be playing any recordings while
> xfs_fsr was running.  But I would think, based on your existing
> fragmentation, that you would only want to run it very occasionally,
> rather than as a regular cron job.  Defragging puts a lot of use on
> your drive as it can result in a large proportion of the entire
> contents of the drive being moved around on it.  So it is best to only
> do it when it is really needed, otherwise you risk wearing out the
> drive.  A better option than a regular defrag cron job is a cron job
> that will report when the fragmentation level goes too high and let
> you decide how to deal with that.  I have systemd and cron jobs that
> do that sort of reporting via email.  For example, I have a systemd
> job that runs every hour to check if the system drive is getting too
> full.  Setting it up is not too difficult, but it does require a way
> of sending email to be installed.
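>
> As a sketch of that kind of reporting job: xfs_db's "frag" command
> prints a line ending in "fragmentation factor NN.NN%", so a check only
> needs to pull that number out and compare it to a threshold.  The
> threshold, device and mail command below are assumptions to adapt:

```shell
#!/bin/bash
# Sketch of a fragmentation-reporting check.  check_frag takes one line of
# xfs_db "frag" output and succeeds only when the percentage exceeds LIMIT.
LIMIT=80

check_frag() {
    local pct
    # Pull the number out of "... fragmentation factor 95.88%"
    pct=$(echo "$1" | sed -n 's/.*fragmentation factor \([0-9.]*\)%.*/\1/p')
    # Compare the integer part against the threshold
    [ "${pct%.*}" -gt "$LIMIT" ]
}

# A real cron job would feed it live output and mail the result, e.g.:
#   line=$(xfs_db -r -c frag /dev/sdb1 | tail -n 1)
#   check_frag "$line" && echo "$line" | mail -s "XFS fragmentation high" root
check_frag "actual 1234, ideal 567, fragmentation factor 95.88%" \
    && echo "defrag worth considering"
```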
>
> When I want to run something in an upcoming gap, I use a "sleepuntil"
> script I found on the net:
>
> root at mypvr:/usr/local/bin# cat sleepuntil
> #!/bin/bash
> set -o nounset
>
> ### // Sleep until some date/time.
> # // Example: sleepuntil 15:57; kdialog --msgbox "Backup needs to be done."
>
>
> error() {
>   echo "$@" >&2
>   exit 1;
> }
>
> NAME_PROGRAM=$(basename "$0")
>
> if [[ $# != 1 ]]; then
>      error "ERROR: program \"$NAME_PROGRAM\" needs 1 parameter and it has received: $#."
> fi
>
>
> CURRENT=$(date +%s)
> TARGET=$(date -d "$1" +%s)
>
> # Avoid the name SECONDS here - it is a bash builtin that keeps counting
> # up after assignment.  Also use -lt for the test: [[ a < b ]] compares
> # strings, not numbers.
> WAIT=$(($TARGET - $CURRENT))
>
> if [[ $WAIT -lt 0 ]]; then
>      error "The target time is in the past.  Specify the day as well as the hour if necessary, like this: $NAME_PROGRAM \"2009/12/30 10:57\"."
> fi
>
> echo "WAIT=$WAIT"
> sleep "$WAIT"
>
> # // End of file
>
> So if I want to run a job that will take two hours, first I would run
> gaps asking it for gaps of minimum length 2 hours:
>
> root at mypvr:~# gaps 2 | head -n 2
> Searching for a minimum duration of 2:00:00
> Gap:  start=Mon 2020-05-04 01:48:00+12:00  end=2020-05-04 07:10:00+12:00  duration=5:22:00
>
> and then I would run the job using sleepuntil:
>
> sleepuntil "2020-05-04 01:49"; myjob
>
> I have put sleepuntil on my web server also:
>
> http://www.jsw.gen.nz/mythtv/sleepuntil
>
> To download and install gaps and sleepuntil on Ubuntu or Debian:
>
> sudo su
> cd /usr/local/bin
> wget http://www.jsw.gen.nz/mythtv/gaps
> chown root:root gaps
> chmod u=rwx,g=rx,o=rx gaps
> wget http://www.jsw.gen.nz/mythtv/sleepuntil
> chown root:root sleepuntil
> chmod u=rwx,g=rx,o=rx sleepuntil
> exit
>
> Gaps requires that you have the MythTV Python bindings installed, and
> also a couple of other Python libraries.  Run it manually and it will
> tell you if you need to install anything.  Gaps is only available in
> Python 2 so does not work with MythTV v31 yet - as soon as I have
> upgraded a system to v31 I will be doing a Python 3 version.
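>
> One more note on the wiki's time-limited xfs_fsr idea: xfs_fsr's -t
> option takes a run-time limit in seconds (the default is 7200, i.e.
> 2 hours), so the schedule maps onto a one-line cron entry.  A sketch,
> with the recording mount point assumed:

```shell
# crontab fragment (sketch): run xfs_fsr for at most 2 hours (7200 s)
# at 05:00 daily; /var/lib/mythtv is an assumed recording mount point
0 5 * * * /usr/sbin/xfs_fsr -t 7200 /var/lib/mythtv
```

> As discussed above, though, a reporting job that tells you when a
> defrag is actually needed is kinder to the drive than a standing one.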
>

Stephen,

Thanks for the suggestions. I'll have to really look at it.

Just a quick note:

When I tried to wget the files from your server I got a 404.

I then clicked on the links in the browser and pasted them into files with
nano. After the chown and chmod, I tried running your example gaps command
and got this:

ian at buster:~$ gaps 2 | head -n 2
  File "/usr/local/bin/gaps", line 9
    from __future__ import
                          ^
SyntaxError: invalid syntax
ian at buster:~$


