Beatmup
Beatmup is designed as a toolset for building efficient signal and image processing pipelines.
This page briefly covers the main concepts of the fairly simple programming model used in Beatmup: contexts, thread pools, tasks, jobs and bitmaps.
A Context instance mainly contains one or more thread pools that can execute processing actions (tasks). At least one Context instance (and quite often just one) is required to do anything in Beatmup.
A thread pool is a bunch of threads and a queue of tasks. The application submits tasks into a pool, where they are executed in order. A given thread pool runs only one task at a time, but it does so in multiple threads in parallel for speed.
Thread pools work asynchronously with respect to the calling code: tasks can be submitted by the application in a non-blocking call, straight from a user interface thread, for example. Context exposes the necessary API entries to check whether a specific task is completed or still waiting in the queue, to cancel a submitted task, to check exceptions thrown during task execution, etc.
By default, when a thread pool is created, the number of threads it hosts is inferred from the hardware concurrency: typically, it is equal to the number of logical CPU cores. This setting is likely to provide the best performance for computationally intensive tasks. The number of threads in a pool can be further adjusted by calling Context::limitWorkerCount().
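For illustration, here is a minimal sketch of setting up a context; the include path and the single-argument form of Context::limitWorkerCount() are assumptions, so check the Context documentation for the exact signatures.

    #include "context.h"   // assumed include path for Beatmup::Context

    int main() {
        Beatmup::Context context;      // creates a Context with its default thread pool
        context.limitWorkerCount(2);   // cap the pool at two worker threads
        return 0;
    }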
A task (instance of AbstractTask) is an isolated elementary processing operation. It can run in parallel in multiple threads for speed, on CPU and/or GPU.
Tasks are not intended to contain arbitrary user code: if you need a specific processing function to be implemented, you most likely need to subclass AbstractTask.
In short, an AbstractTask has three main phases:
- beforeProcessing(), run once in a single thread before the actual processing starts;
- process() and/or processOnGPU(), run in the worker threads and doing the actual processing;
- afterProcessing(), run once in a single thread after the processing is finished.
A detailed description is available in AbstractTask documentation.
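As an illustration, here is a hedged sketch of a custom task scaling a buffer in parallel. The TaskThread accessors used below (currentThread() and numThreads()) are assumptions about the exact API; consult the AbstractTask documentation before relying on them.

    #include "parallelism.h"   // assumed header declaring Beatmup::AbstractTask

    // A custom task multiplying a float buffer by a constant gain in parallel.
    class GainTask : public Beatmup::AbstractTask {
    private:
        float* data;
        size_t size;
        float gain;
    public:
        GainTask(float* data, size_t size, float gain) : data(data), size(size), gain(gain) {}

        bool process(Beatmup::TaskThread& thread) override {
            // every worker thread takes its own interleaved slice of the buffer
            for (size_t i = thread.currentThread(); i < size; i += thread.numThreads())
                data[i] *= gain;
            return true;   // signal successful completion
        }
    };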
Tasks can throw exceptions. If this happens, the thread pool in charge of running the failing task stores the exception internally and rethrows it back to the application code when the latter calls the Context::check() function.
It is recommended to call Context::check() in a timely manner to process exceptions produced by tasks.
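Continuing the sketch above, exception handling could look as follows, assuming Context::check() rethrows the stored exception as a standard C++ exception:

    try {
        context.check();   // rethrows an exception stored by the thread pool, if any
    }
    catch (const std::exception& ex) {
        std::cerr << "A task has failed: " << ex.what() << std::endl;   // requires <iostream>
    }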
When a task is submitted to a thread pool using the Context::submitTask() function, it produces a job: just a ticket number in the queue of the corresponding thread pool. Context functions take it to check the task status or to cancel the task. In this way, the same task can be submitted several times to the same thread pool, producing several different jobs, and will then run several times.
If asynchronous behavior is not needed, a task can be run in a blocking call to Context::performTask(). This hides the mechanics of jobs from the user and just runs the given task.
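The two usage patterns side by side; Context::waitForJob() is an assumed API entry for blocking until a given job is done:

    // asynchronous: returns immediately with a job handle
    Beatmup::Job job = context.submitTask(task);
    // ... do something else while the task is running ...
    context.waitForJob(job);   // assumed entry blocking until the job is finished
    context.check();           // surface any exception the task may have thrown

    // synchronous: a single blocking call, no job to manage
    context.performTask(task);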
Usually, once a task is completed, it is dropped from the thread pool queue. This is referred to as the "normal mode", as opposed to the "persistent mode" in which the task is repeated until it decides to quit by itself. This is convenient for rendering and playback tasks consuming signals from external sources: they still run in a granular fashion (frame by frame or buffer by buffer) but persist until the data is fully consumed.
Context::submitPersistentTask() produces a persistent job for a specific task.
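For example, stopping a persistent job from the application side could look like this; Context::abortJob() is an assumption about how a submitted job is cancelled:

    Beatmup::Job playback = context.submitPersistentTask(playbackTask);
    // the task keeps repeating; later, to stop it before it quits by itself:
    context.abortJob(playback);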
Since Beatmup is mainly oriented towards image processing, AbstractBitmap is another central class in Beatmup.
An AbstractBitmap is basically an image. From the application perspective it has two main implementations.
Beatmup is designed to be lightweight and dependency-free. For this reason it does not incorporate image decoding/encoding features: it cannot natively read and write JPEG or PNG files, for example. This is not a problem when using Beatmup within an application where all the typical means of loading and storing images are accessible through the corresponding AbstractBitmap implementations. Also, for debugging purposes and minimal I/O capabilities, Beatmup supports reading and writing BMP files.
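For instance, BMP input/output could look like this, using Beatmup::InternalBitmap, the platform-independent bitmap implementation; the constructor and saveBmp() signatures below are assumptions, so check the class documentation:

    Beatmup::InternalBitmap bitmap(context, "input.bmp");   // decode a BMP file
    // ... run some processing tasks on the bitmap ...
    bitmap.saveBmp("output.bmp");                           // encode back to BMP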
Beatmup uses the GPU to process images when possible. In order to mix CPU and GPU processing efficiently, Beatmup can store the same image in CPU memory, GPU memory or both. This naturally implies pixel transfer operations. Internally, Beatmup hides this from the user as much as possible and only performs the pixel data transfer when needed. However, when it comes to exchanging image data with the application code, the user typically needs to make sure the CPU version of the image (the one accessible with the platform-specific bitmaps outside of the Beatmup environment) is up-to-date with respect to the GPU version used by Beatmup.
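A hedged sketch of this synchronization point; the Swapper::pullPixels() helper named below is an assumption about how the GPU-to-CPU transfer is exposed:

    // make sure the CPU copy of the pixels matches the GPU version
    Beatmup::Swapper::pullPixels(bitmap);
    // the pixel data can now be safely read by the application code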
When a bitmap is destroyed in the application code, its GPU storage is not destroyed immediately. This is due to the fact that destroying a texture representing the bitmap content in the GPU memory needs to be done in a thread that has access to the GPU, which is one of the threads in the thread pool. The textures of destroyed bitmaps are marked as no longer used and put into a "GPU trash bin". The latter is emptied by calling the GL::RecycleBin::emptyBin() function on a recycle bin object instance returned by Context::getGpuRecycleBin(). Note that the recycle bin instance is only allocated if the GPU is actually used within the given Context.
In applications doing repeated allocations and deallocations of images (e.g., processing video frames in a loop), it is recommended to empty the GPU recycle bin periodically in the described way in order to prevent running out of memory.
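For example, assuming getGpuRecycleBin() returns a null pointer when no GPU is used, per the note above:

    // e.g. once per processed frame:
    Beatmup::GL::RecycleBin* bin = context.getGpuRecycleBin();
    if (bin)
        bin->emptyBin();   // frees textures of destroyed bitmaps in a GPU-aware thread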
Thread pools make it easy to run several tasks one after another. However, when the same pattern of tasks needs to be run repeatedly, Beatmup offers a technique to put multiple tasks together into a single compound task, Beatmup::Multitask. This enables designing complex application-specific processing pipelines.
A multitask is a pipeline of tasks processing some data in a multi-stage fashion. It can simply host multiple tasks and run them in order, without explicitly submitting them into a thread pool. It also implements a set of repetition policies allowing it to skip some stages at the beginning of the pipeline when, for example, no changes were made to the input data and parameters since the previous run.
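A sketch of building such a pipeline; Multitask::addTask() and the repetition policy names are assumptions based on this description, so check the Multitask documentation:

    Beatmup::Multitask pipeline;
    // skip the first stage if its input and parameters did not change since the last run
    pipeline.addTask(preprocessing, Beatmup::Multitask::RepetitionPolicy::IGNORE_IF_UPTODATE);
    pipeline.addTask(rendering, Beatmup::Multitask::RepetitionPolicy::REPEAT_ALWAYS);
    context.performTask(pipeline);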