[mythtv] [mythtv-commits] Ticket #1835: Gradually delete big files to avoid I/O starvation on some filesystems
Boleslaw Ciesielski
bolek-mythtv at curl.com
Wed May 24 21:40:09 UTC 2006
Chris Pinkham wrote:
> I'm not sure about this. The gradual delete has to happen inside
> MainServer::DoDeleteThread() which is where he put it. The deadline for
> deleting is on a per-recording basis. A recording needs to be deleted within
> 5 minutes of when we were told to delete it, otherwise it will pop back up on
> the Watch Recordings screen. This logic is in
> MainServer::HandleQueryRecordings(). The 5-minute check could be pushed
> up to 10 minutes, but that is probably the limit. I would think that you would
> need to calculate the delete increment at the time that you can actually
> start the delete.
I have some ideas about this; let me throw them out and see what you think.
As far as I can tell, there are two constraints on how slowly we can
delete the files.
1. We don't want to delete so slowly that the recording is put back on
the Watch Recordings screen. This can be solved by opening the file
first, then unlinking it, and only then running the gradual delete loop
(using ftruncate instead of truncate); see the sketch below. Inside the
loop we should update recorded.lastmodified (maybe not on every
iteration, but you get the idea). As a bonus, if the backend crashes,
the OS will finish deleting the file automatically as soon as the
descriptor is closed. If we do this, this constraint basically goes away.
2. We don't want to delete so slowly that other recordings are forced to
autoexpire when they wouldn't have been if we had deleted the file
immediately. This can be solved by ensuring that we delete at least as
fast as we record; basically, we should delete at a rate greater than
the maximum cumulative recording rate the backend already computes.
However, I'm not sure whether that value is stored anywhere. Where can I
get it from?
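Roughly what I have in mind, as an untested sketch (gradualDelete,
recordRateBps, and the commented-out lastmodified update are placeholders
of mine, not actual MythTV code):

#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

// Gradually shrink a recording with ftruncate() after unlinking it,
// pacing the truncation so we always free space faster than the
// (assumed) aggregate recording rate in bytes/sec.
bool gradualDelete(const char *path, long long recordRateBps)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return false;

    struct stat st;
    if (fstat(fd, &st) < 0)
    {
        close(fd);
        return false;
    }

    // Unlink first: the name disappears immediately, and if the backend
    // crashes the OS reclaims the remaining blocks when the fd closes.
    unlink(path);

    const unsigned sleepSecs = 1;
    // Free at least twice as fast as we record (the factor is arbitrary).
    off_t increment = (off_t)(recordRateBps * sleepSecs * 2);
    if (increment <= 0)
        increment = 64 * 1024 * 1024;   // fallback: 64 MB per iteration

    off_t size = st.st_size;
    int iterations = 0;
    while (size > 0)
    {
        size = (size > increment) ? size - increment : 0;
        if (ftruncate(fd, size) < 0)
            break;                      // give up; close() frees the rest
        if (++iterations % 60 == 0)
        {
            // Touch recorded.lastmodified in the DB here so
            // HandleQueryRecordings() doesn't resurrect the recording.
        }
        sleep(sleepSecs);
    }

    close(fd);
    return true;
}

The 2x safety factor is just a stand-in; the real increment should be
derived from whatever rate figure the backend can actually give us.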
Bolek