[mythtv] Solving my performance problems...

avalanche at beyondmonkey.com avalanche at beyondmonkey.com
Sun Dec 14 16:52:46 EST 2003


----- Original Message ----- 
From: "Mark Frey" <markfrey at fastmail.fm>
To: <mythtv-dev at mythtv.org>
Sent: Sunday, December 14, 2003 6:57 AM
Subject: [mythtv] Solving my performance problems...


> 
> I've been trying to track down performance problems with my machine and
> MythTV. After spending some time instrumenting the code, and figuring out
> what RingBuffer, RemoteFile, etc. are doing, I determined the following:
> 
> 1. I'm not CPU limited (<50% cpu used)
> 2. I'm not HD bandwidth limited (<30% bandwidth used)
> 
> After timing some things I found that the bandwidth at the raw read(...)
> level is very high (straight from the drive, hitting the cache most of the
> time), the effective bandwidth at the RingBuffer::safe_read(RemoteFile *...)
> call is very low, ~1.5 to 2.0 MB/s (the drive transfer rate is ~30MB/s
> reads). It seems like the Qt event loop processing etc. is the bottleneck. I
> know this is no big surprise; I read a post from Isaac in July where he said
> as much. What I have been trying to do is figure out if there is some way to
> improve this. I could go get a new hard drive, use a kernel with both the
> low latency and preempt patches and possibly get things running acceptably
> for my current uses, but I wonder whether any amount of hardware would
> provide enough performance for HDTV given the current overhead of the whole
> socket-request-response stuff.
> 
> The obvious way to minimize the event loop overhead is to ask for more in a
> single request, amortizing the overhead cost over more bytes. The problem is
> that, because of the socket buffer size, requests are currently limited to
> 64000 byte blocks. I've been hacking away trying to raise this limit to
> determine how much performance I can gain. I've managed to test 128000 byte
> blocks, and it looks like the bandwidth at the safe_read level scales by
> block size (to some limit obviously). With 128000 byte blocks my bandwidth
> increases to ~4MB/s. The problem is things aren't stable with the changes I
> made (I essentially request the larger size on the frontend side, then pull
> from the socket when the data becomes available, while on the backend
> pushing bytes in buffer-sized chunks and waiting for the buffer to clear,
> until I've sent the total requested). I believe I understand some of the
> causes of the instability, but before I spend the time to figure it out I
> wanted to make sure I'm not wasting my time.


I just tried out a quick hack to test something similar. I'm requesting a 
big packet (256k) and splitting it into 64k packets for transmission, 
similar to what you are doing. I'm grabbing the underlying QSocketDevice 
from the backend's QSocket, so the transmission of the smaller packets 
should be very fast, close to network limits. Right now it only works up 
to 256k blocks (limited somewhere else), but even bigger blocks shouldn't 
be a problem if necessary. This patch is just a very quick hack, but it 
seems to work and should make it possible for you to do some benchmarks. I 
only tested this with livetv; I'm not sure whether recordings on a remote 
frontend use the same code path.

av

-------------- next part --------------
A non-text attachment was scrubbed...
Name: bigblocks#1.diff
Type: application/octet-stream
Size: 6296 bytes
Desc: not available
Url : http://mythtv.org/pipermail/mythtv-dev/attachments/20031214/6b4aecd8/bigblocks1.obj
