[mythtv-users] Proposed future power saving networked configuration (0.22 in mind)

Chris Pinkham cpinkham at bc2va.org
Wed Feb 18 18:20:07 UTC 2009


* On Wed Feb 18, 2009 at 07:40:50AM -0800, Jon Bishop wrote:
> Why wouldn't you want your master to be on your fileserver? That seemed 
> like the logical setup to me, and I've only had to add 1 sata controller 

The fileserver is a fileserver.  I don't want a 15-second reschedule
causing issues with slaves writing files to the NAS filesystems.  I
don't want the database on there either.  I have another VM for the
MySQL database with a guaranteed minimum amount of CPU resources so
that I know nothing else on the VMware server can cause the database to
perform too poorly. (definition of 'too' is left up to each reader)

> I guess my question is simply, why would you want a tunerless master if 
> you plan on running multiple slave systems, and your master is connected 
> to an HDHR that it isn't powering anyway? Of course, I'm not against 
> this, there seems no logical reason not to be able to run a backend 
> without a tuner for scheduling, transcoding and comm flagging and such, I 
> just don't see a need for a tunerless master backend myself.

Putting the HDHR as close to the storage as possible saves network I/O.
Putting that on a dedicated NIC on the fileserver and running a slave
on the fileserver means I have zero network I/O on the main network
when recording something from the HDHR.  Running the master in a VM
means I don't have another server up and running just to be my master.
I don't want the master on the fileserver for the reasons stated above.
Since the master is in a (VMware) VM, it has limited options for
tuners.  If those tuners aren't in use most of the time, why leave them
on 24x7?  Just put them in slaves that can be put to sleep when not in
use.
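
Waking a sleeping slave is just a WOL packet away; something like this
from the master (or a cron job), with a made-up MAC:

  wakeonlan 00:11:22:33:44:55     # or: etherwake 00:11:22:33:44:55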

> Though I think your numbers were pretty arbitrary, I definitely see  
> merit to WOL Slaves. I don't see the power savings that removing a  

I put an amp meter on one of these P3 boxes and it was only pulling
around 0.3-0.4 amps with the hard drive, CD-ROM, and floppy disconnected,
so I think my numbers are pretty accurate for my case.
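(At ~120V that's only around 35-50 VA, i.e. somewhere under 50 watts,
ignoring power factor.)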

> couple tuners in your master backend being nearly as high as the return  
> on having 2 or 3 extra tuners in your slave backends turned off when  
> you're not using them (as well as the system).

The VMware server is up 24x7x365 anyway, so making the master a VM
costs virtually nothing.  This means that being able to turn off
the 3 M179 analog tuners most of the day is an immediate savings.
My VMware server needs to be replaced anyway; it's not new enough
to adjust CPU speed or conserve much power when not in use.  It
uses about 2.4 Amps @ 120V minimum, but goes up to 3 Amps or so
when I have multiple transcoding or commflag jobs running on it.
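(2.4-3 amps at 120V works out to roughly 290-360 VA.)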

> See, again, why does the vm on the fileserver have to be the slave?

Nobody said anything about a VM on the fileserver.  I just said I'm
planning on running a slave on the fileserver so that data from the
HDHR doesn't have to cross the main network to get to the disks.

  HDHR -> dedicated network -> fileserver -> mythbackend -> disk

<snip my comments about my nfs root setup>

> This sounds really cool, perhaps you can reply to me off list and  
> describe more how you've got it set up?

It's pretty simple and won't take up much time/space.  I installed
CentOS 5 on a system, then copied that drive to my fileserver.
Each nfsroot client gets its own main directory containing a
dedicated /etc, /var/log, and /tmp (also /var/tmp points to /tmp).
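
The layout on the fileserver ends up looking roughly like this (the
names here are just for illustration):

  /nfsroot/centos5/                    <- shared read-only root image
  /nfsroot/hosts/slave1/etc/           <- per-host private dirs
  /nfsroot/hosts/slave1/var/log/
  /nfsroot/hosts/slave1/tmp/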

The servers PXE boot the kernel and run a special /linuxrc file that
mounts up their dedicated versions of the 3 dirs above.  After that,
it just execs /sbin/init and CentOS boots as normal.  I share a
lot of stuff in /etc by pointing to links in /etc.shared, which is
on the main nfs mount.
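
The /linuxrc itself is just a few lines of shell; roughly something
like this (server name and paths made up, but the idea is the same):

  #!/bin/sh
  # mount this host's private dirs over the shared read-only root,
  # then hand control to the normal init
  HOST=slave1                              # set per-host
  NFS=fileserver:/nfsroot/hosts/$HOST
  mount -t nfs -o nolock $NFS/etc      /etc
  mount -t nfs -o nolock $NFS/var/log  /var/log
  mount -t nfs -o nolock $NFS/tmp      /tmp
  exec /sbin/init
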
It takes me about 5 minutes to spin up a new PXE boot server.  I just
have to setup the MAC/IP on the DHCP server, run a script to create
the server's dedicated directories, and create a link under my
tftpboot directory so the server gets the right kernel and boot
options.
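
The script is nothing fancy; roughly (assuming pxelinux, with made-up
hostnames, MACs, and paths):

  #!/bin/sh
  # create the new host's private dirs from a template
  HOST=$1
  cp -a /nfsroot/hosts/TEMPLATE /nfsroot/hosts/$HOST
  # point pxelinux at the right kernel/boot options for this MAC
  ln -s centos5.cfg /tftpboot/pxelinux.cfg/01-00-11-22-33-44-55
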
I actually have a CentOS 5 image like this and a Fedora Core 5 image.
I have a couple machines on the network still using the FC5 image
because I haven't installed NVidia drivers in the CentOS 5 image yet.
I have a PXE-booted compile-host VM which I use to compile Myth and
also install RPMs into the nfsroot image.

/etc/rc.d/rc.local runs /etc.shared/rc.d/rc.local.common and then
/etc.shared/rc.d/rc.local.$(hostname).
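
So the shared rc.local boils down to a couple of lines, something
like:

  # /etc/rc.d/rc.local in the shared image (sketch)
  /etc.shared/rc.d/rc.local.common
  if [ -x /etc.shared/rc.d/rc.local.$(hostname) ]; then
      /etc.shared/rc.d/rc.local.$(hostname)
  fi
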
Some servers run a /etc.shared/rc.d/mythbackend.$(hostname) script
while others run a common /etc.shared/rc.d/mythbackend.  I have
separate scripts so I can have different logging options on the
servers.  I also modified the mythbackend.* init script(s) to put the
hostname in the log file name so I have a common directory with all
my myth((back|front)end|jobqueue) logs going into it.
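
The log naming change is just a one-liner in the start script, along
the lines of (assuming mythbackend's --logfile option):

  LOGFILE=/var/log/mythtv/mythbackend.$(hostname).log
  mythbackend --daemon --logfile $LOGFILE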

--
Chris

