[mythtv-users] Back-end Virtualization

Raymond Wagner raymond at wagnerrp.com
Fri May 11 19:19:54 UTC 2012


On 5/11/2012 14:34, Nathan Hawkins wrote:
> @Ray - you mention several  good reasons why virtualization
> isn't desirable, and I agree that for the majority of people
> its not the way to go because they simply don't have the raw
> computing needs that some others do. In my case its ALL about
> efficiency... Strictly from a power perspective (which
> translates to TCO), it costs less to virtualize. I, however,
> demand my hardware to actually/continually work rather than
> sitting idle 99.9% of the time. Virtualization allows me to
> get far more efficiency for the same hardware, thus TCO goes
> down while pushing my gear to do the same amount of work on
> less hardware and power.

You're missing the whole point I've been trying to make.  Virtualization 
doesn't do any of that.  Reduced redundant hardware and improved 
hardware utilization come from consolidating multiple applications 
onto a single piece of hardware.  In a commercial setting, 
virtualization merely allows you to do that while retaining the 
security levels you used to get from having physically independent systems.

> They all have their function, but you simply cannot have one
> huge beefy master server doing it all... I've tried it and it
> simply doesn't work. If you take that same beefy server and
> dice it up into VM's...you get soooo much more out of that
> hardware.

That makes no sense whatsoever.  A single Linux server is not 
sufficient to run multiple independent applications, but take that same 
hardware, run all the same applications, only now each with its own 
kernel performing redundant duties, wrap them all in a hypervisor 
running many of those same scheduling, memory, and IO management tasks, 
and somehow it all runs better?  Something doesn't add up there.

> So, the real question is why NOT virtualize? These days,
>  computers are becoming so much more necessary in day to
> day life that I find it increasingly necessary to become
> increasingly more efficient.

And that's what I've been trying to explain.  Virtual machines get you 
some high availability features, if you're willing to pay a whole lot 
for your solution, but those have little to no benefit for MythTV users.

Virtual machines get you nearly the same level of security isolation 
that independent physical servers used to provide, but when MythTV 
doesn't even attempt to restrict access in the first place, using 
virtualization to that end is a bit misguided.

Virtual machines allow you to run different system architectures and 
different operating systems, such as a Linux mythbackend and a Windows 
domain controller.  That one is a bit more difficult to argue against 
for a home user, but using Samba, running a second physical server, or 
running a virtual machine on top of a Linux host rather than a shared 
hypervisor would all be potential alternatives.

Virtual machines allow you to test things that could potentially crash 
your operating system without affecting other tasks running on the same 
hardware, but that's something that would be more common for development 
or testing purposes, rather than something you would expect to 
experience on a production system.

The real, key feature everyone seems so gung ho about in regard to 
virtual machines is the ability to run isolated Linux installs, where 
changes to shared libraries for one application will not screw up 
another application.  This can be done just as well without running a 
virtual machine, and since MythTV needs hardware access that causes all 
sorts of problems when run inside virtual machines, the sensible 
solution is to simply not use one.
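For what it's worth, that kind of library isolation can be had from a 
container sharing the host kernel, with no hypervisor involved.  A 
hypothetical sketch using LXC (the container name, template, and 
package are examples, not something from this thread):

```shell
# Build a minimal Debian rootfs as a container named "mythtv".
# Its libraries are independent of the host's, but it shares the
# host kernel, so there is no redundant kernel or hypervisor layer.
sudo lxc-create -n mythtv -t debian

# Start the container and install software inside its own
# filesystem, leaving the host's packages untouched.
sudo lxc-start -n mythtv -d
sudo lxc-attach -n mythtv -- apt-get install mythtv-backend
```

A plain chroot built with debootstrap gets you much the same effect if 
you only care about library separation and not process isolation.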

> Combine that with I really like Hauppauge and the answer is this
>  USB 64 bit 950 (now the 950Q).

Actually, the 950 and 950Q are two very different devices, despite the 
similarity in their names.  The 950 is an ATSC (broadcast-only) tuner, 
while the 950Q additionally handles QAM, i.e. unencrypted digital cable.

