[mythtv-users] Raid Performance Tweaking
Blammo
blammo.doh at gmail.com
Mon Jul 10 21:14:27 UTC 2006
On 7/10/06, Jens Axboe <mythtv-users at kernel.dk> wrote:
> On Mon, Jul 10 2006, Blammo wrote:
> > 3. IO Scheduler: (Lots of good info here:
> > http://www.wlug.org.nz/LinuxIoScheduler) This changes the way the
> > OS doles out disk IO. I've found the best performance for me comes
> > from the "deadline" scheduler, which gives even timeslices to each
> > thread and so seems to avoid starving any one thread.
>
> Sorry to have to correct you again, but that's not the promise that
> deadline makes (even timeslices to each thread). deadline has no concept
> of a process; it only knows about pending IO requests. The deadline
> aspect of the scheduler is that it "expires" a request when a certain
> deadline has been reached. When that happens, it abandons its current
> position in the sorted list of pending work and jumps to the expired
> request. It then continues serving requests from that point on until
> another deadline is reached (if one ever is). To avoid seek storms, it
> will keep doing X number of requests from an expired location even if
> another expired request shows up right after the current one completes.
>
> So deadline tries to minimize latency at the cost of throughput. That's
> why you'll see a case of, e.g., 4 apps reading files at different
> locations on the disk each getting 1-2MiB/sec (depending mainly on the
> IO size issued) with ~100ms worst-case latency each, where other
> schedulers (CFQ) would give you 90% of the disk bandwidth at 300ms
> worst-case latency (slice_time * (processes - 1)).
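To spell out that last formula: with CFQ's default timeslice of roughly
100ms and the same 4 readers, worst case = slice_time * (processes - 1)
= 100ms * 3 = 300ms, which is where the figure above comes from.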
>
> deadline is simple, and if you have fast enough drives, its
> performance will be good enough. If you end up being too seek-bound,
> CFQ will do a lot better. This, of course, is mainly an issue if you
> have more than 2-3 "clients" running from the same backend; at or below
> that number of clients, either scheduler will perform fine.
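For anyone following along: on 2.6 kernels you can check and switch the
scheduler per disk at runtime through sysfs, no reboot needed (hda below
is just an example device, substitute your own):

  $ cat /sys/block/hda/queue/scheduler
  noop anticipatory deadline [cfq]     (the bracketed entry is active)

  # echo deadline > /sys/block/hda/queue/scheduler    (as root)

You can also pick a default for all disks at boot with the elevator=
kernel parameter, e.g. elevator=deadline on the kernel command line.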
>
> > 4. Performance testing: In the beginning people use hdparm, which is
> > a good way to get started but is very inconsistent. You then graduate
> > to bonnie++, which is a great drive benchmarking tool. When I'm
> > benchmarking a drive, I usually do the first pass in single-user
> > mode to avoid any other process contention, record that, then
> > benchmark the rest in multi-user mode to get a baseline.
>
> One should note that when doing performance testing, you should try to
> mimic the behaviour you are likely to see once the system has been put
> into production. bonnie++ is fine for some things, but it doesn't
> really mimic a myth-like setup.
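For reference, the usual quick-and-dirty invocations of the two tools
mentioned above (device, path and size here are just examples):

  # hdparm -tT /dev/hda      (buffered and cached read timings; run it
                              several times, the numbers vary a lot)
  $ bonnie++ -d /mnt/store -s 4096
                             (-d: directory on the filesystem under test,
                              -s: file size in MB; use well above your
                              RAM size so caching doesn't skew results)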
>
> I wrote an IO testing tool called fio to help with these sorts of
> things. You can imitate a system with X threads reading at Y bandwidth,
> for instance, with a few writers tossed in (throttled or not), or
> whatever you please. It should be quite doable to write a synthetic
> myth-like load with it. You can find the latest snapshot here:
>
> http://brick.kernel.dk/snaps/
>
> If you are interested, at the time of this email that would be
> http://brick.kernel.dk/snaps/fio-git-20060616102504.tar.gz.
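As a starting point, here is a rough sketch of a myth-like fio job file:
three throttled sequential readers standing in for playback/commflagging
and one writer standing in for an active recording. The path, sizes and
rate are made up for illustration, and option names may differ slightly
between fio snapshots, so check the README in the tarball:

  [global]
  directory=/mnt/store   ; filesystem under test
  size=512m              ; per-job file size
  bs=64k

  [readers]
  rw=read
  numjobs=3
  rate=700               ; cap each reader at ~700KiB/sec (SD MPEG-2-ish)

  [writer]
  rw=write

Save it as myth.fio and run it with 'fio myth.fio'.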
>
> --
> Jens Axboe