[mythtv-users] How to automatically copy recordings from slave to master backend?
Craig G.
craig at goranson.org
Tue Oct 2 23:58:21 UTC 2007
>----- Original Message -----
>From: Jerry Rubinow
>To: Discussion about mythtv
>Sent: Tuesday, October 02, 2007 9:26 AM
>Subject: [mythtv-users] How to automatically copy recordings from slave
>tomaster backend?
>
>.......
>
>The solution?:
>
>I'm thinking that since the frontend machine doesn't have a huge
>drive, it can probably hold whatever I want to record in any given
>day, but not much more than that. So stop NFS mounting the MBE's
>drive and record locally. Then run a job in the middle of the night
>that copies all the local recordings to the MBE and adjusts the
>database accordingly.
>
>I probably want some post-record job that updates a table of "things
>to be copied" so it doesn't try to copy anything that's still being
>recorded.
>
>Updating a list of programs to copy, easy. Having a cron job to do
>the copy, easy. Updating database - ???. I see I'd probably have to
>change the hostname field in the recorded table for each copied
>program. What else would I need to update?
>
>-Jerry
Jerry,
If you are running one of the newer SVN versions of mythtv then it's easy
enough to move files from one system to another. I use a setup where I
record and do transcoding of content on my backend, then move it off to a
network-attached HDD.
The secret is setting up two storage groups
(http://www.mythtv.org/wiki/index.php/File_storage and
http://www.mythtv.org/wiki/index.php/Storage_Groups). In the SVN version
you can have multiple storage groups (directories) and when you go to play a
recording mythtv will search the storage directories to find the file. So
if you move a file between storage directories, you don't seem to have to do
any database updates.
Example implementation in your case:
Storage Group: Frontend/Slave
Directory Path: /myth/tv
Storage Group: Backend/Master
Directory Path: /myth/tv2
On your frontend, /myth/tv would be a local directory and /myth/tv2 would
map to the NFS share on your backend. Then on your backend, /myth/tv would
map to the NFS share on your frontend, and /myth/tv2 would be local to it.
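The cross-mounts above could go in /etc/fstab on each box. Something like
the following sketch, where the hostnames "backend" and "frontend" and the
mount options are placeholders you'd adapt to your own network:

```
# On the frontend: /myth/tv is local, /myth/tv2 comes from the backend
backend:/myth/tv2    /myth/tv2    nfs    rw,hard,intr    0 0

# On the backend: /myth/tv2 is local, /myth/tv comes from the frontend
frontend:/myth/tv    /myth/tv     nfs    rw,hard,intr    0 0
```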
On your frontend create a directory called /myth/tomove and make it owned by
your mythtv group.
Configure mythtv to run a job after all your recordings. Have the command
the job runs set to:
touch /myth/tomove/%FILE%
When that job runs it will create a dummy marker file in your "tomove"
directory. Since jobs don't run until after the program is done recording,
the marker will only show up in the tomove directory once the recording is
over.
Then, in a cron job at something like 3:00 AM, run a script like:
#!/bin/bash
# Move each recording marked in /myth/tomove, then remove its marker file.
for marker in /myth/tomove/*
do
file=$(basename "$marker")
echo "Moving file $file"
mv /myth/tv/"$file"* /myth/tv2 && rm "$marker"
done
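A slightly more defensive variant of the script above, as a sketch: the
directories are passed as arguments (so it is easy to try out on dummy
paths first), and a marker is only cleared once the matching recording has
actually been moved.

```shell
#!/bin/bash
# Sketch only: move recordings named by marker files in $1 from $2 to $3.
# In the setup described above you would call:
#   move_marked /myth/tomove /myth/tv /myth/tv2
move_marked() {
    local movedir="$1" srcdir="$2" dstdir="$3"
    local marker base
    for marker in "$movedir"/*
    do
        [ -e "$marker" ] || continue        # empty directory: nothing to do
        base=$(basename "$marker")
        # The glob also picks up sidecar files like the .png preview.
        if mv "$srcdir/$base"* "$dstdir"/ 2>/dev/null; then
            rm -f "$marker"
            echo "Moved $base"
        else
            echo "No recording found for $base, leaving marker" >&2
        fi
    done
}
```

You'd then point cron at a script ending in that call, e.g. a crontab line
like "0 3 * * * /usr/local/bin/mythtv-move.sh" (the script path here is
hypothetical).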
The above is a highly simplified version of what I am doing, but it should
work for you. I haven't had any problems with my setup here, which includes
two frontends, one slave backend, one primary backend, and a SimpleShare
500 GB network-attached drive running NFS. There is a field in the database
that shows where the program was originally recorded, but I haven't had to
change it when moving a file to a different storage group. The version of
mythtv in SVN just finds the file per the notes in the links above. The
basic concept should work for you with a bit of tweaking for your unique
deployment. For me it minimizes the impact on all systems involved when it
comes to moving files around.
An additional benefit of the storage directory model for you is that your
frontend would record to the local storage group first. But if it did
happen to fill up before the "move" job runs at night, mythtv would use the
secondary storage directory over NFS to your backend, so you wouldn't lose
a recording. Then the next recording that started after the move cron job
had freed up space would go back to using your local directory.
Craig