LibVideo is intended to DIRECTLY reflect the video specification, except for the sound part. Sound support is done through callback functions, which have to be implemented in the application or in another library. This way, GGI forces no one to use any particular sound API.
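To make the callback idea concrete, here is a minimal C sketch. Both the callback type and the registration function are invented names for illustration; only the fact that the application (or another library) implements the sound handler comes from the text above.

    #include <stddef.h>
    #include <stdio.h>

    /* The application provides a function of this type; libvideo
     * would call it whenever decoded sound data becomes available.
     * The typedef and the registration prototype are hypothetical. */
    typedef void (*vid_audio_cb_t)(const void *samples, size_t len,
                                   void *user);

    /* Hypothetical registration entry point in libvideo. */
    int videoSetAudioCallback(vid_audio_cb_t cb, void *user);

    /* Application-side handler: feed the samples to whatever sound
     * API the application prefers (OSS, ALSA, a network sink, ...). */
    static void play_audio(const void *samples, size_t len, void *user)
    {
        (void)samples; (void)user;
        printf("received %lu bytes of sound data\n", (unsigned long)len);
    }

The application would register play_audio once and is then free to route the samples through whatever sound API it likes.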
At the beginning, the user creates a video resource. Internally, libvideo asks libovl for the resource you specified. If libovl says yes, it's available, then all operations responsible for _displaying_ the video will be performed by it. If libovl says no, then libvideo falls back to its target for software emulation. The target tries to find out _what_ made libovl fail and tries to emulate that. If it turns out that libovl failed merely because of a lack of hardware decompression support, then the target does the decompression in software and libovl is used for the rest. If libovl fails again, then libblt is asked for a bob to emulate the video resource. Finally, libvideo creates a raw buffer using libbuf, where the displayed video data will be stored.
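The following C sketch restates that fallback chain. All helper names are invented, and the stubs always fail here; in the real library each step would query libovl, the software-emulation target, libblt and libbuf respectively.

    #include <stddef.h>

    typedef struct { int dummy; } vid_spec;                 /* hypothetical */
    typedef struct { void *raw_buf; } vid_resource;         /* hypothetical */

    static vid_resource *try_libovl(const vid_spec *s)      { (void)s; return NULL; }
    static vid_resource *try_target_sw(const vid_spec *s)   { (void)s; return NULL; }
    static vid_resource *try_libblt_bob(const vid_spec *s)  { (void)s; return NULL; }
    static void *libbuf_alloc_raw(const vid_spec *s)        { (void)s; return NULL; }

    /* Hardware overlay first, then software emulation on top of
     * libovl, then a libblt bob; whichever succeeds is backed by a
     * raw libbuf buffer holding the displayed video data. */
    static vid_resource *acquire_video_resource(const vid_spec *s)
    {
        vid_resource *r;

        if ((r = try_libovl(s)) == NULL &&      /* full hw support?   */
            (r = try_target_sw(s)) == NULL &&   /* sw decompression   */
            (r = try_libblt_bob(s)) == NULL)    /* plain blitting     */
            return NULL;

        r->raw_buf = libbuf_alloc_raw(s);       /* backing raw buffer */
        return r;
    }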
libvideo uses libgpf to load/save video data from/to the raw buffer. libgpf is also used to rewind, step backward, step forward, start and stop the video data stream. Whether the first three operations actually work depends on the data source. Example: if you're doing video conferencing, you have an endless data stream, which doesn't allow you to move forward/backward through the data as you would with a cassette. But you can always start and stop the data transfer.
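As a sketch of that source-dependent behaviour (the stream type and its seekable flag are assumptions, not part of any API shown here):

    #include <stdio.h>

    typedef struct {
        int seekable;    /* 0 for live/endless streams */
    } vid_stream;        /* hypothetical type */

    /* Rewinding only makes sense when the source supports seeking;
     * a live conference stream can merely be started and stopped. */
    static int stream_rewind(vid_stream *s)
    {
        if (!s->seekable) {
            fprintf(stderr, "rewind: live stream, not seekable\n");
            return -1;
        }
        /* ... ask libgpf to reposition the data stream ... */
        return 0;
    }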
That is the way it works when libovl provides full acceleration. Graphics cards have a special overlay buffer for this. In libovl you'll find a ggiOvl2Buf() function, which lets you access it using libbuf.
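A short sketch of that access path. Only the name ggiOvl2Buf() comes from the text above; its exact signature and the two opaque types are assumptions.

    typedef struct ggi_overlay ggi_overlay;   /* assumed opaque type */
    typedef struct ggi_buffer  ggi_buffer;    /* assumed opaque type */

    /* Assumed prototype; the real declaration lives in libovl. */
    ggi_buffer *ggiOvl2Buf(ggi_overlay *ovl);

    /* Frames written into the returned libbuf buffer land directly
     * in the card's overlay memory and are displayed in hardware. */
    static ggi_buffer *overlay_memory(ggi_overlay *ovl)
    {
        return ggiOvl2Buf(ovl);
    }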
libvideo can even handle this scenario efficiently: you have an old graphics card without video support, but you have an MPEG decoder card in your machine. In this case, libovl is used through libvideo's target to decode the video data and store it in a buffer. Then libblt is used to play the video in software.
libgpf's output target is the one that determines whether a data conversion or (de)compression of any form is needed. Some I/O targets also allow you to influence/determine their behaviour by setting flags. Let me go into more detail about how this works:
libvideo provides a function to load a video (e.g. an MPEG). This function tells libgpf to open an MPEG video as input and libbuf as output. The libbuf target uses the buffer address given as a parameter (have a look at libgpf/include/gpf.h - you can find libgpf in the ggi-libs CVS module). libvideo gets the buffer address from ggiOvl2Buf(), which returns a raw buffer. You can also tell libgpf's libbuf target (by setting flags) to direct the data using a DMA transfer. The DMA transfer itself is performed by libbuf, not by libgpf.

libgpf provides protocols (e.g. target, file, http, ftp, tcp, udp) to do that. This allows you to do video conferencing directly from the internet into a special overlay buffer, where the data is decompressed and displayed in hardware. The target protocol is intended to redirect the output (or input) to libraries like libggi, libbuf, libblt, etc. What libgpf does is perform blitting operations as an endless or finite data stream, not as a batchop the way libblt does.
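Putting the pieces together, here is a sketch of that load path. Every identifier below is invented for illustration; the real API lives in libgpf/include/gpf.h and is not reproduced in this document.

    typedef struct gpf_stream gpf_stream;                     /* hypothetical */

    gpf_stream *gpf_open_input(const char *url, int fmt);     /* hypothetical */
    gpf_stream *gpf_open_output(const char *target, void *buf); /* hypothetical */
    int gpf_set_flags(gpf_stream *s, int flags);              /* hypothetical */
    int gpf_stream_copy(gpf_stream *in, gpf_stream *out);     /* hypothetical */

    #define GPF_FMT_MPEG 1   /* made-up constants */
    #define GPF_FLAG_DMA 2

    /* overlay_buf is the raw buffer obtained via ggiOvl2Buf(). */
    static int play_mpeg_into_overlay(void *overlay_buf)
    {
        /* input: MPEG data through one of libgpf's protocols
         * (file, http, ftp, tcp, udp, ...) */
        gpf_stream *in  = gpf_open_input("http://example.org/movie.mpg",
                                         GPF_FMT_MPEG);
        /* output: the libbuf target, pointed at the overlay buffer */
        gpf_stream *out = gpf_open_output("target:buf", overlay_buf);

        gpf_set_flags(out, GPF_FLAG_DMA); /* libbuf performs the DMA  */
        return gpf_stream_copy(in, out);  /* stream, not a batchop    */
    }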