[mythtv] Backend process dies at 4GB file limit? - coding hints wanted.... PATCH ATTACHED TOO
Buzz
buzz at oska.com
Fri Jan 20 06:53:58 UTC 2006
The problem is that, as it exists now in CVS, ThreadedFileWriter.cpp has no
"usual failure path" from the 'write' call (in safe_write). safe_write
returns a uint to indicate how much was written, and '0' is a legitimate
amount to write, not an error case. I've changed the relevant places to
allow it to return a negative value (failure), and to pass the failure back
up the calling chain to RingBuffer, where it emits an error to the log.
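As a rough sketch (not the attached patch; the signature and error handling here
are illustrative only), the idea is to make the wrapper return a signed value so
-1 can mean failure while 0 stays a legitimate byte count:

    // Illustrative sketch only -- not the actual ThreadedFileWriter change.
    // Returns the number of bytes written (0 is legal), or -1 on an
    // unrecoverable error such as EFBIG at the 4GB limit.
    #include <unistd.h>
    #include <cerrno>
    #include <cstddef>

    static ssize_t safe_write(int fd, const void *data, size_t sz)
    {
        size_t tot = 0;
        const char *buf = static_cast<const char *>(data);

        while (tot < sz)
        {
            ssize_t ret = write(fd, buf + tot, sz - tot);
            if (ret < 0)
            {
                if (errno == EINTR)
                    continue;      // interrupted: just retry
                return -1;         // real failure (e.g. EFBIG): report it upward
            }
            if (ret == 0)
                break;             // nothing written this pass; not an error
            tot += static_cast<size_t>(ret);
        }
        return static_cast<ssize_t>(tot);
    }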
Both the backend and the frontend still seem oblivious to the error condition
that occurs when RingBuffer->Write() returns -1 during a recording.
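For the callers, something along these lines is what I mean by not being
oblivious (a hypothetical fragment -- the logging macro, Write() signature and
StopRecording() helper are placeholders, not taken from the tree):

    // Hypothetical recording-loop fragment: propagate a RingBuffer write
    // failure instead of silently continuing to record.
    int wrote = ringBuffer->Write(packet, packetLen);   // assumed signature
    if (wrote < 0)
    {
        VERBOSE(VB_IMPORTANT, "RingBuffer::Write() failed -- aborting recording");
        StopRecording();                                 // placeholder helper
        return;
    }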
Other suggestions?
Buzz.
> >
> > Am I doing the right thing here... Or is there an easier way?
> >
> Just ignore the signal - write will fail with EFBIG and the
> recording should follow the usual failure path. You should
> be able to test it with ulimit -f, which will let you
> generate SIGXFSZ with smaller files.
>
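To illustrate that suggestion, here is a standalone test sketch (nothing in it
comes from the MythTV tree): with SIGXFSZ ignored, write() returns -1 with
errno == EFBIG once the ulimit -f limit is hit, instead of the signal killing
the process. Run it under a small `ulimit -f` (the block size the limit is
expressed in depends on the shell) so the limit is reached quickly.

    // Standalone test program, not MythTV code.
    #include <csignal>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <fcntl.h>
    #include <unistd.h>

    int main()
    {
        signal(SIGXFSZ, SIG_IGN);   // don't let the file-size limit kill us

        int fd = open("bigfile.test", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        char buf[4096];
        memset(buf, 0, sizeof(buf));

        for (;;)
        {
            ssize_t ret = write(fd, buf, sizeof(buf));
            if (ret < 0)
            {
                if (errno == EFBIG)
                    fprintf(stderr, "hit the file size limit (EFBIG)\n");
                else
                    perror("write");
                break;
            }
        }
        close(fd);
        return 0;
    }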