<div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div lang="EN-US" link="blue" vlink="purple"><br><div><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">I’m aware of how hypervisors operate. </span></p>
</div></div></div></blockquote><div><br></div><div>I wasn't trying to be condescending. Your experience running MythTV alongside VirtualBox is vastly different from using a bare-metal hypervisor.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-US" link="blue" vlink="purple"><div><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">I wasn’t sure if the same PCI latency COULD be induced by running a hypervisor inside the base OS, as opposed to running the OS on the hypervisor, which is why I asked for clarification. Each hypervisor/platform has its own intricacies, and I’m not familiar with Xen. I also don’t want to run into potential problems in the future with my own rig. <u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> </span></p></div></div></div></blockquote><div><br></div><div>Because a type 2 hypervisor is essentially just a program running on a bare-metal OS, the hypervisor itself shouldn't cause latency in the host OS (the "base OS"). Of course the hypervisor will impact the overall performance of the host OS, but it shouldn't add any PCI latency. The host OS is the only one that can directly access PCI devices, so it gets all the time it needs. I hope that explains it clearly.</div>
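<div>One way to see that distinction for yourself on Linux (a minimal sketch; the helper name is my own, and it assumes sysfs is mounted): on the host OS, the physical PCI devices show up directly under /sys/bus/pci/devices, whereas inside a type 2 guest you only see whatever emulated devices the hypervisor chooses to expose.</div>

```python
import glob
import os

def host_pci_devices(limit=5):
    """Return up to `limit` PCI device addresses visible to this kernel.

    On a bare-metal host these are the real physical devices; inside a
    type 2 guest (e.g. a VirtualBox VM) you only see the hypervisor's
    emulated devices. Returns an empty list on non-Linux systems.
    """
    return sorted(os.path.basename(p)
                  for p in glob.glob("/sys/bus/pci/devices/*"))[:limit]

print(host_pci_devices())
```

<div>Comparing that listing on the host against the same listing inside a guest makes the difference obvious: the guest's device list bears no resemblance to the real hardware.</div>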
</div>