[mythtv] [mythtv-commits] Ticket #1835: Gradually delete big files to avoid I/O starvation on some filesystems

Chris Pinkham cpinkham at bc2va.org
Thu May 25 04:49:23 UTC 2006


* On Wed May 24, 2006 at 05:40:09PM -0400, Boleslaw Ciesielski wrote:
> Chris Pinkham wrote:
> 1. We don't want to delete so slowly that the recording is put back on 
> the Watch Recordings screen. This can be solved by opening the file 
> first, then unlinking and then the gradual delete loop (using ftruncate 
> instead of truncate). In the loop we should update recorded.lastmodified 
> (maybe not on every iteration but you get the idea). As a bonus, if the 
> backend crashes the file will be deleted completely automatically by the 
> OS. If we do this, this constraint basically goes away.

I like this, so you'll open the file before the existing unlink code,
then at the bottom of the deletethread method, after we delete from the
database, you'll call your gradualdelete method to truncate the opened
(but previously unlinked) file.

> 2. We don't want to delete so slowly that some other recordings are 
> forced to autoexpire, even though they wouldn't be if we deleted the 
> file immediately. This can be solved by ensuring that we delete at least 
> as fast we record. Basically we should delete at the rate greater than 
> the max cumulative recording rate already computed by the backend. 
> However I am not sure if it's stored anywhere? Where can I get it from?

I think it is just in the AutoExpirer, but there's no reason that logic
couldn't be moved out if it makes sense and is usable in more places
than AutoExpire.

--
Chris


More information about the mythtv-dev mailing list