[mythtv] [mythtv-commits] Ticket #1835: Gradually delete big files to avoid I/O starvation on some filesystems

Boleslaw Ciesielski bolek-mythtv at curl.com
Thu May 25 12:38:23 UTC 2006

Chris Pinkham wrote:
> * On Wed May 24, 2006 at 06:46:58PM -0400, Daniel Kristjansson wrote:
>> Let's just say every file needs to be fully truncated in 4 minutes and
>> DoHandleDeleteRecording() has to return immediately. This ensures
>> that all the files deleted by AutoExpire::SendDeleteMessages()
>> are actually deleted by the next time it runs.
> Sounds OK, but you still have to deal with the issue that if you delete
> 5 files at the same time (manually), then you need to know about the
> sizes of all 5 in order to delete them in the 4-minute window.  Unless
> you update the lastmodified timestamp as mentioned, and even that won't
> fix the issue entirely.  If you hit the deletelock mutex on files 2-5
> while file 1 is taking 4 minutes to delete, then you don't have a chance
> to update the lastmodified timestamp.  You could update it right after
> you get the deletelock lock, but for file #3 that is too late, since it
> would have been 8 minutes since the delete was triggered.  This is
> why I think it is a bit more complicated than you guys are thinking.
> If you say all files have to be truncated in the next 4 minutes, then
> you still need to know the total filesize so you can delete files #1-4
> at a fast enough speed to finish file #5 within that 4 minutes.

My current plan is to see whether auto-expire can be delayed until all 
pending deletes are finished. There is no reason to run auto-expire 
before then, as long as we are deleting faster than we are recording.
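That plan could be sketched with a shared counter of in-flight deletes; the expirer skips its pass while any truncate loop is still running. The names here (`DeleteGuard`, `ExpireRunAllowed`) are illustrative, not MythTV's API.

```cpp
#include <atomic>

static std::atomic<int> s_pendingDeletes{0};

// RAII guard: a delete thread holds one for the whole truncate loop.
struct DeleteGuard
{
    DeleteGuard()  { ++s_pendingDeletes; }
    ~DeleteGuard() { --s_pendingDeletes; }
};

// Auto-expire would check this before selecting expiry candidates; safe
// as long as deletes, on average, finish faster than recordings grow.
static bool ExpireRunAllowed()
{
    return s_pendingDeletes.load() == 0;
}
```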

BTW, is the deletelock mutex only used to serialize the deletes for 
performance reasons or is it also guarding some shared resources? 
Perhaps the recordings can be deleted from the database (and unlinked) 
before we grab the lock and start the truncate loop.
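For what it's worth, the unlink-first idea works on POSIX filesystems because the data (and its blocks) persist until the last open descriptor closes, so ftruncate() on the fd still reclaims space after unlink(), and nothing else can open the file in the meantime. A minimal sketch, with a hypothetical helper name:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <string>

// Remove the directory entry up front, keep a descriptor open, and let
// the caller run the slow truncate loop afterwards, outside any lock.
static int UnlinkButKeepOpen(const std::string &path)
{
    int fd = open(path.c_str(), O_WRONLY);
    if (fd < 0)
        return -1;
    if (unlink(path.c_str()) != 0)   // gone from the namespace immediately
    {
        close(fd);
        return -1;
    }
    return fd;
}
```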

