<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;"><br><div><div>On 03 Jun 2014, at 05:19, Henk D. Schoneveld <<a href="mailto:belcampo@zonnet.nl">belcampo@zonnet.nl</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><br>On 02 Jun 2014, at 20:59, Johan Van der Kolk <<a href="mailto:johan.vanderkolk@gmail.com">johan.vanderkolk@gmail.com</a>> wrote:<br><br><blockquote type="cite">Hi,<br><br>I’m running myth 0.27 (v0.27.1-7-g41d04b6) on Ubuntu server 14.04 with an i5 (3 GHz) and 4 GB of memory, using 4 DVB-S2 tuners as sources.<br>Due to relocation of the dish I have to install a slave backend; I can’t (and don’t want to) move my server to the garden shed.<br><br>I have found that recording about 8-10 HD channels simultaneously starts killing processes on the backend (not the myth processes, though), and the DVB-S2 handling by itself requires 100% CPU. Mythcommflag did not help either, although I could limit the number of simultaneous jobs to get more breathing space.<br>So my thought was to solve both problems at once with a new slave backend that runs only the DVB-S2 capture, with storage and commflagging still done on the master backend (which has 8TB of ZFS storage).<br><br>I can see two issues now:<br>Bandwidth: When recording 10 channels (what I want to achieve), I estimate worst case (based on what I recorded so far) 3GB per channel per hour, or 833Mb/s one-way traffic, without other overhead. And that is without watching anything…<br></blockquote>10 * 3 GB = 30,000 MB per hour; 30,000 MB / (60 * 60) s = 8.33 MB/s.<br>Any current hard disk should be able to sustain that, as long as it isn’t writing randomly in small blocks. I use XFS, where it is possible to use allocation sizes of 1M - 500M to avoid fragmentation of recordings. 
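The arithmetic above can be checked with a short sketch (my own illustration, not from the thread; the 10-channel and 3 GB/channel/hour figures are Johan’s worst-case estimates):

```python
# Back-of-envelope check of the recording bandwidth, using the
# worst-case figures from the thread: 10 simultaneous HD channels
# at roughly 3 GB per channel per hour.
channels = 10
gb_per_channel_per_hour = 3.0

total_gb_per_hour = channels * gb_per_channel_per_hour   # 30 GB/hour
mb_per_second = total_gb_per_hour * 1000 / 3600          # ~8.33 MB/s
mbit_per_second = mb_per_second * 8                      # ~66.7 Mbit/s

print(f"{mb_per_second:.2f} MB/s  =  {mbit_per_second:.1f} Mbit/s")
```

At roughly 67 Mbit/s sustained, this traffic fits comfortably on a single gigabit link; the 833 Mb/s figure quoted earlier overstates the load by more than a factor of ten (MB/s and Mb/s were likely conflated).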
see: <a href="http://www.mythtv.org/wiki/Optimizing_Performance">http://www.mythtv.org/wiki/Optimizing_Performance</a> <br></blockquote><div><br></div>For those who understand bonnie (I don’t): the results below are from a bonnie++ run using the command line given at <a href="https://calomel.org/zfs_raid_speed_capacity.html">https://calomel.org/zfs_raid_speed_capacity.html</a>.</div><div>I ran it on my ZFS pool.</div><div><br></div><pre style="margin: 0px; font-size: 18px; font-family: Menlo;">
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Zolder-server   80G           318688  37 136544 25           269337 20 38.6   2
Latency                         4301ms     1176ms             549ms    658ms
</pre><div><br></div><div><br></div><div><br><blockquote type="cite"><blockquote type="cite">How to solve this, and does myth traffic between the master and slave benefit from Jumbo Frames? I could use two network cards in each machine and create a 2 x 1 Gb trunk between the switches.<br>The second part of the problem might be that the ZFS file storage is not fast enough (maybe upgrade to SSD).<br><br>Configuration:<br>Is it possible to configure mythtv in such a way that it does what I want, or are there better ways to do this? 
<br><br>Any help appreciated!<br><br>Johan<br>_______________________________________________<br>mythtv-users mailing list<br><a href="mailto:mythtv-users@mythtv.org">mythtv-users@mythtv.org</a><br>http://www.mythtv.org/mailman/listinfo/mythtv-users<br>http://wiki.mythtv.org/Mailing_List_etiquette<br>MythTV Forums: https://forum.mythtv.org<br></blockquote><br></blockquote></div><br></body></html>