[mythtv] New Music Visualization : Need to process signal between frames
mythtv-dev at platformedia.com
Fri Dec 16 06:24:07 UTC 2011
I'm building a 'Piano' spectrum visualization, where the pitches detected
directly correspond to piano key pitches.
At the moment, the standard fps=20 means that I'm only seeing a fraction of
the data in my .process(), since the loop driving the visualization is
discarding the 'between the frames' nodes of audio data, choosing to call
.process() only when it wants to .draw() the visualization.
However, for low notes on the piano, I need longer periods of audio data to
get good recognition, so I'd like to run some kind of .process() even for
non-displayed chunks of audio.
Since this feature doesn't seem to be present, what is the MythTV-preferred
way to implement this:
(a) Add an additional parameter to .process(node), i.e. .process(node,
will_be_displayed=true). The downside is that every other visualization
gets an extra 'ignore this one' decision to make; OR
(b) Add an additional virtual function to
VisualBase <http://code.mythtv.org/doxygen/classVisualBase.html>,
i.e. .process_between_displayed_frames(node), which no-one else will be
hooking into, but which will let me analyze the whole audio stream. The
downside is that VisualBase gets a little hairier; OR
(c) Some method I haven't thought of.
Each option will need a little something added to the loop that is
currently discarding the node data.
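Option (b) could look something like the sketch below. To be clear, this
is my own mock-up, not MythTV's actual API: the name processUndisplayed
and the VisualNode stub are placeholders, and the real VisualBase has more
members than shown. The point is that a default no-op virtual keeps every
existing visualization compiling unchanged, while the piano one overrides
it to feed its long analysis window:

```cpp
class VisualNode;   // stand-in for MythTV's audio-sample node type

class VisualBase
{
  public:
    virtual ~VisualBase() = default;

    // Existing hook: called only when a frame is about to be drawn.
    virtual bool process(VisualNode *node) = 0;

    // Proposed hook (hypothetical name): called for the in-between nodes
    // that the output loop would otherwise discard. Default is a no-op,
    // so no other visualization needs to change.
    virtual bool processUndisplayed(VisualNode *) { return true; }
};

class PianoVisualizer : public VisualBase
{
  public:
    bool process(VisualNode *node) override
    {
        processUndisplayed(node);   // displayed nodes feed the analysis too
        // ... then the draw-related spectrum/key mapping would go here ...
        return true;
    }

    bool processUndisplayed(VisualNode * /*node*/) override
    {
        // accumulate samples into the long low-note analysis window here
        return true;
    }
};
```

The output loop would then call visual->processUndisplayed(node) instead
of dropping the node on the floor, which is the "little something" each
option needs.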
An alternative (funky) workaround is to set fps high enough that I catch
all the data. But that's pretty inefficient, since there's no need to
refresh the screen at ~90fps (44100/512 ≈ 86 looks like it would be the
chunk rate).
Could someone with an opinion please weigh in? I'll submit a patch vs SVN
once it's looking classy...