Saturday, October 31, 2009

gavl color channels

While programming something completely unrelated, I stumbled across a missing feature in gavl: extracting a single color channel from a video frame into a grayscale frame. The inverse operation is inserting a color channel from a grayscale frame into a video frame (overwriting the contents of that channel). This allows you to assemble an image from separate color planes.

Both were implemented with a minimalistic API (one enum and three functions), which works for all 35 pixelformats. First of all, we have an enum for the channel definitions:
typedef enum
  {
    GAVL_CCH_RED,   // Red
    GAVL_CCH_GREEN, // Green
    GAVL_CCH_BLUE,  // Blue
    GAVL_CCH_Y,     // Luminance (also grayscale)
    GAVL_CCH_CB,    // Chrominance blue (aka U)
    GAVL_CCH_CR,    // Chrominance red (aka V)
    GAVL_CCH_ALPHA, // Transparency (or, to be more precise, opacity)
  } gavl_color_channel_t;

To get the exact grayscale format for one color channel, you first call:
int gavl_get_color_channel_format(const gavl_video_format_t * frame_format,
                                  gavl_video_format_t * channel_format,
                                  gavl_color_channel_t ch);
It returns 1 on success or 0 if the format doesn't have the requested channel. After you have the channel format, extracting and inserting is done with:
int gavl_video_frame_extract_channel(const gavl_video_format_t * format,
                                     gavl_color_channel_t ch,
                                     const gavl_video_frame_t * src,
                                     gavl_video_frame_t * dst);

int gavl_video_frame_insert_channel(const gavl_video_format_t * format,
                                    gavl_color_channel_t ch,
                                    const gavl_video_frame_t * src,
                                    gavl_video_frame_t * dst);
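To illustrate how the three functions play together, here is a minimal sketch that swaps the two chroma channels of a frame. It assumes that the frame and its format were obtained elsewhere (e.g. from a decoder) and that the format argument of the extract/insert functions is the format of the full video frame; error handling is reduced to a simple bail-out:

#include <string.h>
#include <gavl/gavl.h>

static void swap_chroma(const gavl_video_format_t * format,
                        gavl_video_frame_t * frame)
  {
  gavl_video_format_t channel_format;
  gavl_video_frame_t * cb;
  gavl_video_frame_t * cr;

  memset(&channel_format, 0, sizeof(channel_format));

  /* Both chroma channels share the same grayscale format */
  if(!gavl_get_color_channel_format(format, &channel_format, GAVL_CCH_CB))
    return; /* Format has no chroma channels */

  cb = gavl_video_frame_create(&channel_format);
  cr = gavl_video_frame_create(&channel_format);

  /* Pull both channels out of the frame... */
  gavl_video_frame_extract_channel(format, GAVL_CCH_CB, frame, cb);
  gavl_video_frame_extract_channel(format, GAVL_CCH_CR, frame, cr);

  /* ...and put them back the other way around */
  gavl_video_frame_insert_channel(format, GAVL_CCH_CB, cr, frame);
  gavl_video_frame_insert_channel(format, GAVL_CCH_CR, cb, frame);

  gavl_video_frame_destroy(cb);
  gavl_video_frame_destroy(cr);
  }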
In the gmerlin tree there are test programs extractchannel and insertchannel, which test the functions for all possible combinations of pixelformats and channels. They live in gmerlin and not in gavl because the test images are loaded and saved with gmerlin plugins.

Saturday, October 24, 2009

Major gmerlin player upgrade

While working on several optimization tasks in the player engine, I found out that the player architecture sucked. So I made a major upgrade (well, actually a downgrade, since lots of code was kicked out). Let me elaborate on what exactly was changed.

Below you see a block schematic of the player engine as it was before (subtitle handling is omitted for simplicity):
The audio and video frames were read from the file by the input thread, pulled through the filter chains (see here), and pushed into the fifos.

The output threads for audio and video pulled the frames from the fifos and sent them to the soundcard or the display window.

The idea behind the separate input thread was that if CPU load is high and decoding a frame takes longer than it should, the output threads can still continue with the frames buffered in the fifos. It turned out that this was the only advantage of this approach, and it only worked if the average decoding time was still less than realtime.

The major disadvantage is that with fifos where frames are pushed at the input and pulled at the output, the system becomes very prone to deadlocks. In fact, the code for the fifos became bloated and messy over time.

While programming a nice new feature (updating the video display while the seek slider is moved), I noticed that playback was messed up after seeking, and I quickly blamed the fifos for this. The result was a big cleanup, shown below:
You can see that the input thread and the fifos are completely removed. Instead, the input plugin is protected by a simple mutex, and the output threads do the decoding and processing themselves. The advantages are obvious:
  • Much less memory usage (one video frame instead of up to 8 frames in the fifo)
  • Deadlock conditions are much less likely (if not impossible)
  • Much simpler design, bugs are easier to fix
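To make the new scheme a bit more concrete, here is a toy sketch (hypothetical names, not the actual gmerlin code) of two output threads sharing one input behind a single mutex; decoding happens under the lock, filtering and output outside of it:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t input_mutex = PTHREAD_MUTEX_INITIALIZER;
static int frames_left = 5; /* Stand-in for "end of file not reached" */

/* Stand-in for decoding one frame via the input plugin
 * (always called with input_mutex held) */
static int read_frame(const char * stream)
  {
  if(frames_left <= 0)
    return 0;
  frames_left--;
  printf("decoded one %s frame\n", stream);
  return 1;
  }

/* One output thread (audio or video) */
static void * output_thread(void * arg)
  {
  const char * stream = arg;
  for(;;)
    {
    pthread_mutex_lock(&input_mutex);
    int got = read_frame(stream);
    pthread_mutex_unlock(&input_mutex);
    if(!got)
      break;
    /* Filtering and soundcard/display output would happen here,
       outside the lock, so the other thread can decode meanwhile */
    }
  return NULL;
  }

int main(void)
  {
  pthread_t audio, video;
  pthread_create(&audio, NULL, output_thread, "audio");
  pthread_create(&video, NULL, output_thread, "video");
  pthread_join(audio, NULL);
  pthread_join(video, NULL);
  return 0;
  }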
The only disadvantage is that if a file cannot be decoded in realtime, audio and video run out of sync. In the old design, the input thread made sure that the decoding of the streams didn't drift too far out of sync. For these cases, I need to implement frame skipping. This can be done in several steps:
  • If the stream has B-frames, skip them
  • If the stream has P-frames, skip them (display only I-frames)
  • Display only every nth I-frame with increasing n
Frame skipping is the next major thing to do. But with the new architecture it will be much simpler to implement than with the old one.
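Just to illustrate the idea, here is a hypothetical sketch of such an escalating skip policy (none of this exists in gmerlin yet; all names are made up):

typedef enum { FRAME_I, FRAME_P, FRAME_B } frame_type_t;

/* Decide whether a frame may be dropped at the current skip level.
 * Level 0: show everything, 1: drop B-frames, 2: drop B- and P-frames,
 * 3 and above: show only every nth I-frame, with n growing with the level. */
static int frame_can_be_skipped(frame_type_t type, int level, int i_frame_index)
  {
  switch(level)
    {
    case 0:  return 0;
    case 1:  return (type == FRAME_B);
    case 2:  return (type != FRAME_I);
    default: /* level >= 3: keep only every nth I-frame, n = level - 1 */
      return (type != FRAME_I) || (i_frame_index % (level - 1) != 0);
    }
  }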