[mythtv-users] OT: which NAS appliance can you recommend

Yeechang Lee ylee at pobox.com
Sat Apr 7 21:10:17 UTC 2007


Michael T. Dean <mtdean at thirdcontact.com> wrote a few days ago:
> What I wondered is if you mean it can do all of the above at once or
> any one of the above at once.  I.e., does it handle three HD
> recording streams while doing two HD playback streams and four
> (transcoding|realtime commflagging) jobs?

Yes. I can do all of the above, with smooth HD playback, on either my
Clovertown slave backend's RAID 6 array or the Pentium 4
frontend/primary-backend's RAID 5 array.

Important disclaimers:

* I've only briefly tested having two simultaneous HD playback
  streams.
* The Clovertown server runs the actual transcoding/realtime
  commflagging jobs (at "High"), no matter where the recordings are
  stored.
* I use, and recommend, the aforementioned patches and tweaks to
  0.20-fixes.
* I have 2GB of RAM on the Pentium 4 frontend/backend. (Also on the
  Clovertown, but in four months of use I have yet to see swap kick in
  on it, while the swap on the frontend/backend is usually used to
  some degree. That's what a 500MB mythconverg database gets you.)
* I have the frontend/backend's mythconverg database and MySQL
  logfiles stored on the RAID 5 array, not on the non-RAID local SATA
  boot drive that holds the Fedora Core 6 and MythTV application
  files. (A my.cnf sketch follows this list.)
* Even with 2GB of RAM on the frontend/backend, a 500MB mythconverg
  database almost inevitably leads to swapping. This generally isn't
  a problem, except when a) a lot of swap (900MB+) is in use and b)
  some system-level background job is running; then I might see
  sluggish playback, IOBOUND errors, and/or 'DevRdB(0) Error: Driver
  buffers overflowed' errors in mythbackend.log.

  When these errors do occur (no more often than once or twice every
  couple of weeks, I'd say), it's typically between midnight and
  12:30am, but I haven't yet figured out what job or jobs could be
  running then. I've already checked the usual suspects of
  /etc/crontab and the user-account crontabs; a broader sweep is
  sketched just after this list.
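
The broader sweep, for anyone curious; beyond /etc/crontab and the
user crontabs, a stock Fedora Core 6 layout has several other launch
points (paths assume FC6; run this as root):

    # Everything cron/anacron might be firing around midnight:
    cat /etc/crontab /etc/anacrontab
    ls -l /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly
    # Per-user crontabs:
    for u in $(cut -d: -f1 /etc/passwd); do
        crontab -l -u "$u" 2>/dev/null
    done
    # What actually ran between midnight and 12:30am, per the cron log:
    awk '$3 >= "00:00:00" && $3 <= "00:30:00"' /var/log/cron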
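
As for keeping mythconverg and the MySQL logfiles on the array
(mentioned above), it comes down to pointing mysqld at it. A minimal
sketch, with /mnt/raid5 standing in for wherever your array is
mounted (on FC6 you may also need to adjust SELinux contexts):

    # /etc/my.cnf (excerpt); paths are illustrative
    [mysqld]
    datadir=/mnt/raid5/mysql
    log-error=/mnt/raid5/mysql/mysqld.log

    # One-time move, with mysqld stopped:
    service mysqld stop
    cp -a /var/lib/mysql /mnt/raid5/
    service mysqld start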

That said:

* I have no reason to think that I could not raise the stakes to,
  say, four, five, or six transcoding/realtime commflagging jobs at
  "High," and/or even more simultaneous jobs at lower CPU settings;
  I've just never needed to find out. If I ever bother to set up my
  second HD5000 ATSC card, though, I'll find out firsthand.

> If so, I'm very interested in what throughput you're getting on your
> (apparently gigabit) network (i.e., from an scp/sftp or something).

Surprisingly little, actually; I've done very little work on
optimizing the (gigabit) network. Over CIFS, here's what I get when
writing from the frontend/backend to the RAID 6 array:

    # My write test:
    rm -rf 20gb; time dd if=/dev/zero of=20gb bs=1024k count=20000; rm -rf 20gb

    20971520000 bytes (21 GB) copied, 913.41 seconds, 23.0 MB/s

Pretty crummy, eh? Nonetheless, it's a) twice what I typically got
from the Infrant NAS and b) enough to handle my needs.
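
To tell the wire apart from CIFS and the disks, a raw-TCP test is the
quickest check; here's a sketch using iperf (not installed by
default, and 192.168.1.10 is a stand-in for your server's address):

    # On the RAID 6 server:
    iperf -s
    # On the frontend/backend:
    iperf -c 192.168.1.10 -t 30

A healthy gigabit link reports somewhere near 900 Mbit/s; a good
iperf number next to a slow dd points at CIFS or the array rather
than the network.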

If I run the same test from the frontend/backend on the local RAID 5
array, I get

> Did you know you don't have to run mythbackend to do this?  Just
> run the (comparatively lightweight) mythjobqueue daemon (making it
> a "job queue server" rather than a slave backend), instead.

Three reasons I don't use mythjobqueue:

* In the past I've found that if a job (usually mythtranscode) dies,
  it'll take mythjobqueue with it. mythbackend is pretty good about
  staying up and simply respawning the job.

* I intend to put a tuner on the slave backend at some point.

* mythjobqueue can't reset itself when hit with a SIGHUP while
  writing to a logfile the way mythbackend can (see the logrotate
  sketch below).
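
(The logfile point matters for log rotation: mythbackend resets
itself on SIGHUP and keeps logging, so a logrotate snippet along
these lines works for it but not for mythjobqueue. Paths are
illustrative:)

    # /etc/logrotate.d/mythbackend -- sketch only
    /var/log/mythtv/mythbackend.log {
        weekly
        rotate 4
        compress
        missingok
        postrotate
            killall -HUP mythbackend 2>/dev/null || true
        endscript
    }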

-- 
Yeechang Lee <ylee at pobox.com> | +1 650 776 7763 | San Francisco CA US

