[mythtv] Backend process dies at 4GB file limit? - coding hints wanted.... PATCH ATTACHED TOO
Ian Caulfield
imc25 at cam.ac.uk
Fri Jan 20 11:49:33 UTC 2006
This has been a problem for me - while I use XFS and thus have no
large-file woes, I tend to run my hard drives close to full, and if myth
runs out of disk space, the backend basically dies...
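
Something along these lines in the write loop would at least let the
disk-full error surface instead of taking the process down (an untested
sketch, not the actual CVS code):

#include <cerrno>
#include <cstddef>
#include <unistd.h>

// Sketch only: return the bytes written, or -1 on a real error such as
// ENOSPC (disk full) or EFBIG (file size limit), so '0' stays a
// legitimate result and callers can tell failure apart from a short write.
static ssize_t safe_write_sketch(int fd, const void *data, size_t sz)
{
    const char *buf = static_cast<const char *>(data);
    size_t tot = 0;
    while (tot < sz)
    {
        ssize_t ret = write(fd, buf + tot, sz - tot);
        if (ret < 0)
        {
            if (errno == EINTR)   // interrupted by a signal: just retry
                continue;
            return -1;            // ENOSPC, EFBIG, EIO, ...: report upward
        }
        tot += static_cast<size_t>(ret);
    }
    return static_cast<ssize_t>(tot);
}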
Ian
On Fri, 20 Jan 2006, Buzz wrote:
> The problem is that as it exists now in CVS, ThreadedFileWriter.cpp has no
> "usual failure path" from the 'write' command (in safe_write). safe_write
> returns a uint to indicate how much was written, and '0' is a legitimate
> amount to have written, not an error case. I've changed the relevant places
> to allow it to return a negative value (failure), and to pass the failure
> back up the calling chain to RingBuffer, where it emits an error to the log.
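
Condensed, that change amounts to something like the following
(signatures assumed for illustration only, not the actual diff):

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <sys/types.h>

// The patched safe_write: < 0 on failure, 0 still a legitimate
// "nothing written" amount (definition as in the sketch above).
ssize_t safe_write_sketch(int fd, const void *data, size_t sz);

// Failure is passed up the chain unchanged; RingBuffer is where it
// finally gets logged.  Callers must now test for < 0, not == 0.
ssize_t ringbuffer_write_sketch(int fd, const void *buf, size_t count)
{
    ssize_t ret = safe_write_sketch(fd, buf, count);
    if (ret < 0)
        fprintf(stderr, "RingBuffer: error writing stream data: %s\n",
                strerror(errno));
    return ret;
}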
>
> The backend and frontend both still seem oblivious to the error condition
> that occurs when RingBuffer->Write() returns -1 during a recording.
>
> Other suggestions?
>
> Buzz.
>
>>>
>>> Am I doing the right thing here... Or is there an easier way?
>>>
>> Just ignore the signal - write will then fail with errno set to EFBIG
>> and the recording should follow the usual failure path. You should
>> be able to test it with ulimit -f; that will allow you to
>> generate SIGXFSZ with smaller files.
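
For reference, that suggestion boils down to something like this
standalone sketch (not Myth code; run it after e.g. 'ulimit -f 1024' in
the shell so the limit trips quickly):

#include <cerrno>
#include <csignal>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    // Without this, exceeding the file size limit delivers SIGXFSZ and
    // kills the process; with it ignored, write() fails with EFBIG and
    // the normal error path is reached.
    signal(SIGXFSZ, SIG_IGN);

    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    char buf[4096] = {0};
    for (int i = 0; i < 16384; ++i)   // cap at 64 MB in case no limit is set
    {
        if (write(fd, buf, sizeof(buf)) < 0)
        {
            fprintf(stderr, "write failed: %s\n", strerror(errno)); // EFBIG at the limit
            break;
        }
    }
    close(fd);
    return 0;
}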