[mythtv-commits] mythtv commits
mythtv at cvs.mythtv.org
Thu Sep 23 22:40:06 EDT 2004
Changes committed by cpinkham on Fri Sep 24 02:37:50 2004
NuppelVideoPlayer.cpp NuppelVideoPlayer.h commercial_skip.cpp
dbcheck.cpp libmythtv.pro programinfo.cpp programinfo.h
scheduledrecording.cpp scheduledrecording.h sr_items.cpp
sr_items.h sr_root.cpp sr_root.h tv_rec.cpp
housekeeper.cpp main.cpp mainserver.cpp mainserver.h
globalsettings.cpp playbackbox.cpp statusbox.cpp statusbox.h
Better get your reading glasses on for this long commit log.... It's
long, but I wanted to describe a little about the added JobQueue features
so people know how things are working.
* Create a JobQueue for processing things like Commercial Flagging, etc.
The only built-in job currently is Commercial Flagging, but there are
plans to convert the transcoder to use this queue as well. There are
also config settings for up to 4 "User Jobs" which can be set up to run
after a recording finishes. These can be any script or program the user
wants to run, such as nuvexport, the 'archive' script mentioned below,
or any other job the user wants to run.
Jobs are queued by inserting a recording into the jobqueue table. Each
backend runs a JobQueue thread looking for jobs to process. A job can
be set up to run on any available backend by leaving the jobqueue.hostname
column empty, or it can be forced to run on a specific backend by
putting that backend's hostname in the hostname column.
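The claiming rule above can be sketched in Python (an illustrative model
only, not the actual C++ JobQueue code; the real thread queries the MySQL
jobqueue table, modeled here as a list of dicts):

```python
# Sketch of how a backend's JobQueue thread might select jobs: a job
# with an empty hostname field may run on any available backend, while
# a job with a hostname set runs only on that named backend.

def claimable_jobs(jobqueue_rows, my_hostname):
    """Return queued jobs this backend is allowed to claim."""
    return [
        job for job in jobqueue_rows
        if job["status"] == "QUEUED"
        and job["hostname"] in ("", my_hostname)
    ]

queue = [
    {"id": 1, "type": "COMMFLAG", "hostname": "",         "status": "QUEUED"},
    {"id": 2, "type": "USERJOB1", "hostname": "backend2", "status": "QUEUED"},
    {"id": 3, "type": "USERJOB2", "hostname": "backend1", "status": "RUNNING"},
]

print([j["id"] for j in claimable_jobs(queue, "backend1")])  # [1]
print([j["id"] for j in claimable_jobs(queue, "backend2")])  # [1, 2]
```

The job type and status names here are stand-ins; only the empty-vs-named
hostname behavior is taken from the commit log.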
User Jobs are configurable via the Myth setup program (not mythfrontend's
setup). The job scripts are global settings, so they can be changed from
any backend. The Myth backend setup program also allows the user to
specify which jobs can run on that particular backend, so one backend
could be set up to run only commercial flagging and User Job #2, while
another backend could be set up to run only User Jobs #1 and #2.
Jobs can be aborted or paused by setting a flag in the jobqueue table.
I plan on adding a menu to the job status screen to allow the user to
pause/resume/cancel a job.
In addition to the status field in the jobqueue table, there is also
a "comment" field which jobs can update. The commercial flagger updates
this field with the current completion percentage.
Jobs for a recording are processed sequentially in the order they were
inserted into the table. Currently the commercial flagging job is
inserted first when a recording finishes, then the 4 user jobs if
applicable. If you want to run script A before script B, then script A
must be assigned a lower User Job # than script B.
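The ordering described above can be sketched as follows (an illustrative
Python model, not the actual scheduling code; the job type names are
stand-ins):

```python
# Jobs for one recording run sequentially in insertion order:
# commercial flagging is inserted first, then the enabled User Jobs
# in ascending number, so a lower User Job # always runs earlier.

def job_order_for_recording(enabled_user_jobs):
    """enabled_user_jobs: set of enabled job numbers, e.g. {1, 3}."""
    order = ["COMMFLAG"]
    order += ["USERJOB%d" % n for n in sorted(enabled_user_jobs)]
    return order

print(job_order_for_recording({3, 1}))  # ['COMMFLAG', 'USERJOB1', 'USERJOB3']
```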
A sample archive script is included in the contrib directory as
For user jobs, the following substitutions are applied to the user job
command:
%FILE% - filename of recording
%DIR% - Myth Recording directory where file is located
%HOSTNAME% - hostname of the backend where the recording was made
The rest are pretty self-explanatory:
%TITLE% %SUBTITLE% %DESCRIPTION% %CATEGORY%
%RECGROUP% %CHANID% %STARTTIME% %ENDTIME%
Starttime and Endtime are in YYYYMMDDHHMMSS format. All text fields
are substituted unquoted, so if you need to quote something you have to
do it yourself in the command.
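A minimal sketch of that substitution, assuming plain text replacement
with no shell quoting (the command string, script name, and recording
values below are made up for illustration):

```python
# Each %TOKEN% in the user job command is replaced with the raw field
# value. Nothing is quoted for you, so quoting belongs in the command
# itself, as in the double-quoted arguments below.

def build_user_job(command, recording):
    for token, value in recording.items():
        command = command.replace("%" + token + "%", value)
    return command

recording = {
    "FILE": "1002_20040923200000.nuv",
    "DIR": "/var/video",
    "TITLE": "Evening News",
    "STARTTIME": "20040923200000",   # YYYYMMDDHHMMSS
}

cmd = build_user_job('myarchiver "%DIR%/%FILE%" --title "%TITLE%"', recording)
print(cmd)
# myarchiver "/var/video/1002_20040923200000.nuv" --title "Evening News"
```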
* Eliminate the old "CommercialSkipHost" dedicated commercial flagging
host setting in favor of the new method of specifying which jobs can run
on which backends. This allows a user to have one or more commercial
flagging hosts, so the old setting isn't needed anymore. Users who
were using the Dedicated Commercial Flagging Host setting will need
to set up the new Job Queue "Allow Jobs" settings per host in order to
turn commercial flagging on/off on specific backends.
* Add the ability to set a font to use for items in the listarea on the
status screen. Currently its only use is to highlight aborted and
errored jobs in the Job Queue.
* Add scrolling highlight bar to themes/default/status-ui.xml for the
status screen listarea.
* Add a Job Queue screen to the status page in mythfrontend. Jobs are
shown on the status screen if they are uncompleted, in JOB_ERRORED
status, or recently finished (within the past 2 hours). Jobs with an
error status show up in RED. The titles and start time/date are shown.
When scrolling through the list, details of the job are displayed in the
help area at the top of the status screen.
* Changed the "CommercialSkipCPU" setting to "JobQueueCPU" since it now
applies to any job in the queue that supports checking this value to
regulate its CPU usage. User jobs will be run at nice(19) on "Low",
nice(10) on "Medium", and full throttle on "High". The way Commercial
Flagging uses this value is unchanged. This setting is now on the
backend setup screen as well, since it is configurable per backend.
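The nice levels quoted above can be captured in a small sketch (Python
for illustration; the fallback to the gentlest level for an unknown
setting is an assumption, not from the commit log):

```python
# Map the JobQueueCPU setting to the nice level a user job runs at:
# nice(19) on "Low", nice(10) on "Medium", full throttle on "High".

NICE_FOR_SETTING = {"Low": 19, "Medium": 10, "High": 0}

def job_nice_level(job_queue_cpu):
    # Assumed default: treat unrecognized values as "Low" (gentlest).
    return NICE_FOR_SETTING.get(job_queue_cpu, 19)

print(job_nice_level("Medium"))  # 10
```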
* Add Job Queue info to mythbackend status webpage. The job list here
fits the same criteria as the mythfrontend status screen.
* Added ProgramInfo::GetPlaybackURL() method which takes an optional
hostname as an argument. The hostname argument is the hostname of the
host doing the playback. If it is blank, then it is assumed playback
is local. This method returns either 1) an absolute filename if the
desired recording is accessible via direct file access, or 2) a
myth:// formatted URL for remote playback. Currently this code is
used by the commercial flagger in the Job Queue. The master backend
could probably be converted to use this when generating the list of
recordings to give to the frontend; that way the logic would live in
one place rather than in several places in the source code.
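The local-vs-remote decision described above can be modeled like this
(an illustrative Python sketch of ProgramInfo::GetPlaybackURL(), not the
C++ implementation; the exact myth:// URL layout and the port number are
assumptions):

```python
# Return an absolute local path when the recording file is directly
# accessible (local disk or NFS), otherwise a myth:// URL so the file
# can be streamed from the backend. os.path.exists stands in for the
# real accessibility check.

import os

def get_playback_url(recdir, basename, backend_host, backend_port=6543):
    local_path = os.path.join(recdir, basename)
    if os.path.exists(local_path):
        return local_path                                  # direct file access
    return "myth://%s:%d/%s" % (backend_host, backend_port, basename)

# Prints either a filesystem path or a myth:// URL, depending on
# whether the file exists on the machine running this.
print(get_playback_url("/var/video", "1002_20040923200000.nuv", "backend1"))
```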
* Made it so mythcommflag can run on machines that don't have access to
the backend's recording directory via NFS. It will now open the
proper filename/url through the ringbuffer so it will be streamed from
the backend if the file is not available locally (or via NFS). This
also uses ProgramInfo::GetPlaybackURL().
* Add code to check for 5 & 10 second commercials to try to catch some
of those spots for things like local news and SciFi blurbs.
* Rework the program deletion code in MainServer to serialize the file
deletion and recordedmarkup deletion in one thread instead of spawning
off a thread for each and having them compete for CPU time if the file
is stored on the same machine as the database server. This seems to
speed up responsiveness on the frontend during file deletes quite a bit.