API reference

This is the documentation of the Beatmup Python API.

More information is available in the C++ core documentation.

Java documentation for Android is also available.

beatmup_keras module

Contains the activation functions and the Shuffle operation implementation supported by the Beatmup neural network inference backend, as well as a utility converting Keras models into Beatmup models.

exception beatmup_keras.CannotExport

Exception thrown when a given model cannot be exported as a Beatmup model

class beatmup_keras.Shuffle(*args, **kwargs)

Shuffling layer implementation.

This layer changes the order of feature maps in the tensor in a special way, which allows for an efficient inference implementation in the Beatmup engine: no memory copy is done, only the texture sampling order changes, which has minimal to no impact on the inference speed. The shuffle is done by contiguous blocks of 4 feature maps. For a shuffling step equal to n, the input channels are put in the following order on output:

0, 1, 2, 3, 4n, 4n+1, 4n+2, 4n+3, 8n, 8n+1, 8n+2, 8n+3, …, 4, 5, 6, 7, 4n+4, 4n+5, 4n+6, 4n+7, 8n+4, …

For a shuffling step equal to 1, the operation is the identity. In order for all the input channels to be present in the output, the shuffling step times 4 must divide the number of input channels.
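
For illustration, the resulting channel permutation can be computed in plain Python. The shuffle_order helper below is a sketch written for this documentation, not part of the package:

    def shuffle_order(num_channels, step):
        # Output order of the input channel indices for a given shuffling step.
        # Requires 4 * step to divide num_channels (see above).
        assert num_channels % (4 * step) == 0
        blocks = num_channels // 4
        order = []
        for offset in range(step):              # offset of the block group
            for q in range(blocks // step):     # walk the blocks with a stride of `step`
                block = q * step + offset
                order.extend(range(4 * block, 4 * block + 4))
        return order

    print(shuffle_order(16, 2))
    # [0, 1, 2, 3, 8, 9, 10, 11, 4, 5, 6, 7, 12, 13, 14, 15]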

call(tensor)

This is where the layer’s logic lives.

The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state, including tf.Variable instances and nested Layer instances, in __init__(), or in the build() method that is called automatically before call() executes for the first time.

Args:
inputs: Input tensor, or dict/list/tuple of input tensors.

The first positional inputs argument is subject to special rules:

  • inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.

  • NumPy array or Python scalar values in inputs get cast as tensors.

  • Keras mask metadata is only collected from inputs.

  • Layers are built (build(input_shape) method) using shape info from inputs only.

  • input_spec compatibility is only checked against inputs.

  • Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.

  • The SavedModel input specification is generated using inputs only.

  • Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc is only supported for inputs and not for tensors in positional and keyword arguments.

*args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.

**kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:

  • training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.

  • mask: Boolean input mask. If the layer’s call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).

Returns:

A tensor or list/tuple of tensors.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.

beatmup_keras.brelu1(x)

Activation function: ReLU clipped into [0, 1] range. Corresponds to DEFAULT activation function.

beatmup_keras.brelu6(x)

Activation function: 1/6 * ReLU clipped into [0, 1] range. Corresponds to the BRELU6 activation function. This is the activation function used in the MobileNet v1 and v2 architectures, with the output stretched to the 0..1 range. This stretching is added to cope with the backend constraints, namely to be able to store the activation values in integer-valued textures on low-end devices.
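
For reference, brelu1 and brelu6 are equivalent to the following NumPy expressions. This is a sketch for clarity; the actual implementations rely on Keras backend operations:

    import numpy as np

    def brelu1_reference(x):
        # ReLU clipped to the [0, 1] range
        return np.clip(x, 0.0, 1.0)

    def brelu6_reference(x):
        # ReLU6 stretched to [0, 1]: clip(x, 0, 6) / 6 == clip(x / 6, 0, 1)
        return np.clip(x / 6.0, 0.0, 1.0)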

beatmup_keras.export_model(model, context, model_data=None, prefix='')

Converts a Keras model to a Beatmup model: scans the given Keras model, trying to convert its layers into Beatmup operations. The converted operations are connected into a Beatmup model.

Supports Conv2D, Dense, 2D average and max pooling and a few other Keras layers with some limitations. Batch normalization is supported after Conv2D. An activation function is required after Conv2D/BatchNormalization. ReLU with a given max_value is supported, as well as the Beatmup activation functions brelu1, brelu6 and sigmoid_like. A residual connection added before the activation function application is supported.

Parameters:
  • model – The input keras model

  • context – A Beatmup context

  • model_data – A beatmup.WritableChunkCollection that will store the model data. If None, a new collection is created.

  • prefix – A string prefix to add to the operation names.

Returns the converted model (an instance of beatmup.nnets.Model) and its corresponding data (model_data extended with the operations data if provided, else a new collection). Raises CannotExport if the model cannot be exported.
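
A minimal conversion sketch, assuming TensorFlow/Keras is installed; the layer sizes are illustrative and chosen to satisfy the constraints above (output feature maps multiple of 4, activation present, valid padding for the image input):

    import tensorflow as tf
    import beatmup
    import beatmup_keras

    keras_model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, strides=2,
                               activation=beatmup_keras.brelu1,
                               input_shape=(32, 32, 3)),
    ])
    # ... train the model or load pretrained weights here ...

    ctx = beatmup.Context()
    model, model_data = beatmup_keras.export_model(keras_model, ctx)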

beatmup_keras.sigmoid_like(x)

Activation function approximating sigmoid in a piecewise-linear fashion. Corresponds to SIGMOID_LIKE activation function.

beatmup module

AbstractBitmap

Abstract bitmap class

AbstractTask

Abstract task executable in a thread pool of a Context

AffineMapping

2x3 affine mapping containing a 2x2 matrix and a 2D point

Bitmap

A bitmap wrapping a numpy container without copying

BitmapResampler

Resamples an image to a given resolution.

ChunkCollection

A key-value pair set storing pieces of arbitrary data (chunks) under string keys.

ChunkFile

File containing chunks.

Context

Beatmup engine context

CustomPipeline

Custom pipeline: a sequence of tasks to be executed as a whole.

FloodFill

Discovers areas of similar colors up to a tolerance threshold around given positions (seeds) in the input image.

ImageShader

A GLSL program to process images

IntegerContour2D

A sequence of integer-valued 2D points

InternalBitmap

Bitmap whose memory is managed by the Beatmup engine.

Metric

Measures the difference between two bitmaps

Multitask

Conditional multiple tasks execution.

PixelFormat

Specifies bitmap pixel format

Scene

An ordered set of layers representing renderable content

SceneRenderer

AbstractTask rendering a Scene.

ShaderApplicator

A task applying an image shader to bitmaps

WritableChunkCollection

Writable ChunkCollection implementation for Python.

class beatmup.AbstractBitmap

Abstract bitmap class

get_context(self: beatmup.AbstractBitmap) beatmup.Context

Returns Context the current bitmap is attached to

get_memory_size(self: beatmup.AbstractBitmap) int

Returns bitmap size in bytes

get_pixel_format(self: beatmup.AbstractBitmap) beatmup.PixelFormat

Returns pixel format of the bitmap

save_bmp(self: beatmup.AbstractBitmap, filename: str) None

Saves a bitmap to a BMP file

zero(self: beatmup.AbstractBitmap) None

Sets all the pixels to zero

class beatmup.AbstractTask

Abstract task executable in a thread pool of a Context

class beatmup.AffineMapping

2x3 affine mapping containing a 2x2 matrix and a 2D point

get_inverse(*args, **kwargs)

Overloaded function.

  1. get_inverse(self: beatmup.AffineMapping) -> beatmup.AffineMapping

Returns inverse mapping

  2. get_inverse(self: beatmup.AffineMapping, point: tuple) -> tuple

Computes inverse mapping of a point

get_matrix(self: beatmup.AffineMapping) tuple

Returns the mapping matrix

get_position(self: beatmup.AffineMapping) tuple

Returns the mapping origin

invert(self: beatmup.AffineMapping) None

Inverts the mapping

is_point_inside(self: beatmup.AffineMapping, point: tuple) bool

Tests whether a point from the output domain is inside the input axes span

rotate_degrees(self: beatmup.AffineMapping, angle: float, fixed_point: tuple = (0.0, 0.0)) None

Rotates the mapping around a given point in target domain

scale(self: beatmup.AffineMapping, factor: float, fixed_point: tuple = (0.0, 0.0)) None

Scales the mapping around a given point in target domain

set_center_position(self: beatmup.AffineMapping, point: tuple) None

Adjusts the mapping origin so that the center of the axes box matches a given point

translate(self: beatmup.AffineMapping, shift: tuple) None

Translates the mapping

class beatmup.Bitmap

A bitmap wrapping a numpy container without copying

class beatmup.BitmapResampler

Resamples an image to a given resolution. Implements different resampling approaches, including standard ones (bilinear, bicubic, etc.) and a neural network-based 2x upsampling approach dubbed “x2”.

class Mode

Resampling mode (algorithm) specification

Members:

NEAREST_NEIGHBOR : zero-order: usual nearest neighbor

BOX : ‘0.5-order’: anti-aliasing box filter; identical to nearest neighbor when upsampling

LINEAR : first order: bilinear interpolation

CUBIC : third order: bicubic interpolation

CONVNET : upsampling x2 using a convolutional neural network

property name
property cubic_parameter

Cubic resampling parameter (alpha)

property input

Input bitmap

property input_rectangle

Specifies a rectangular working area in the input bitmap. Pixels outside of this area are not used.

property mode

Resampling algorithm (mode)

property output

Output bitmap

property output_rectangle

Specifies a rectangular working area in the output bitmap. Pixels outside of this area are not affected.
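
A usage sketch; the BitmapResampler constructor taking a Context is an assumption of this example, and the input and output bitmaps are built with beatmup.bitmaptools.chessboard (documented below) to keep the snippet self-contained:

    import beatmup

    ctx = beatmup.Context()
    fmt = beatmup.PixelFormat.QUAD_BYTE
    source = beatmup.bitmaptools.chessboard(ctx, 64, 64, 8, fmt)
    target = beatmup.bitmaptools.chessboard(ctx, 128, 128, 8, fmt)   # output container at the target size

    resampler = beatmup.BitmapResampler(ctx)     # constructor signature assumed
    resampler.input = source
    resampler.output = target
    resampler.mode = beatmup.BitmapResampler.Mode.CUBIC
    ctx.perform_task(resampler)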

class beatmup.ChunkCollection

A key-value pair set storing pieces of arbitrary data (chunks) under string keys. A chunk is a header and a piece of data packed in memory like this: (idLength[4], id[idLength], size[sizeof(chunksize_t)], data[size]) ChunkCollection defines an interface to retrieve chunks by their ids.

chunk_exists(self: beatmup.ChunkCollection, id: str) bool

Checks if a specific chunk exists.

Parameters:

id – the chunk id

Returns True if the chunk exists in the collection.

chunk_size(self: beatmup.ChunkCollection, id: str) int

Retrieves size of a specific chunk.

Parameters:

id – the chunk id

Returns the size of the chunk in bytes, or 0 if not found.

close(self: beatmup.ChunkCollection) None

Closes the collection after a reading session.

open(self: beatmup.ChunkCollection) None

Opens the collection to read chunks from it.

size(self: beatmup.ChunkCollection) int

Returns the number of chunks available in the collection after it is opened.

class beatmup.ChunkFile

File containing chunks. The file is not loaded in memory, but is scanned when first opened to collect the information about available chunks.

class beatmup.Context

Beatmup engine context

abort_job(self: beatmup.Context, job: int, pool: int = 0) bool

Aborts a given submitted job.

busy(self: beatmup.Context, pool: int = 0) bool

Returns True if a specific thread pool in the context is executing a Task

check(self: beatmup.Context, pool: int = 0) None

Checks a specific thread pool for errors: rethrows exceptions that occurred during task execution, if any.

empty_gpu_recycle_bin(self: beatmup.Context) None

Empties the GPU recycle bin. When a bitmap is destroyed in the application code, its GPU storage is not destroyed immediately. This is because destroying a texture representing the bitmap content in GPU memory needs to be done in a thread that has access to the GPU, which is one of the threads in the thread pool. The textures of destroyed bitmaps are therefore marked as no longer used and put into a “GPU trash bin”, which is emptied by calling this function. In applications doing repeated allocations and deallocations of images (e.g., processing video frames in a loop), it is recommended to empty the GPU recycle bin periodically in this way in order to prevent running out of memory.
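
A sketch of the recommended pattern when processing frames in a loop; the per-frame GPU work is left as a placeholder, and the chessboard image stands in for a decoded frame:

    import beatmup

    ctx = beatmup.Context()
    for frame_index in range(1000):
        frame = beatmup.bitmaptools.chessboard(ctx, 640, 480, 16,
                                               beatmup.PixelFormat.TRIPLE_BYTE)
        # ... run GPU tasks consuming `frame` here ...
        del frame                            # the GPU texture, if any, goes to the recycle bin
        if frame_index % 100 == 99:
            ctx.empty_gpu_recycle_bin()      # actually release the textures of destroyed bitmaps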

is_gpu_queried(self: beatmup.Context) bool

Returns True if the GPU was queried

is_gpu_ready(self: beatmup.Context) bool

Returns True if the GPU was queried and is ready to use

limit_worker_count(self: beatmup.Context, max_value: int, pool: int = 0) None

Limits maximum number of threads (workers) when performing tasks in a given pool

max_allowed_worker_count(self: beatmup.Context, pool: int = 0) int

Returns maximum number of working threads per task in a given thread pool

perform_task(self: beatmup.Context, task: beatmup.AbstractTask, pool: int = 0) float

Performs a given task. Returns its execution time in milliseconds

query_gpu_info(self: beatmup.Context) object

Queries information about the GPU and returns a tuple of vendor and renderer strings, or None if no GPU is available.

repeat_task(self: beatmup.Context, task: beatmup.AbstractTask, abort_current: bool, pool: int = 0) None

Ensures a given task is executed at least once

Parameters:
  • task – The task

  • abort_current – If True and the same task is currently running, the abort signal is sent.

  • pool – A thread pool to run the task in

submit_persistent_task(self: beatmup.Context, task: beatmup.AbstractTask, pool: int = 0) int

Adds a new persistent task to the jobs queue

submit_task(self: beatmup.Context, task: beatmup.AbstractTask, pool: int = 0) int

Adds a new task to the jobs queue

wait(self: beatmup.Context, pool: int = 0) None

Blocks until all the submitted jobs are executed

wait_for_job(self: beatmup.Context, job: int, pool: int = 0) None

Blocks until a given job finishes

warm_up_gpu(self: beatmup.Context) None

Initializes the GPU within the given Context if it is not yet initialized (has no effect otherwise). GPU initialization may take some time and is normally done when the first task using the GPU is run. Warming up the GPU is useful to avoid the app getting stuck for a while when it launches its first GPU task.
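
A sketch tying the methods above together: a task is submitted asynchronously, awaited, and checked for errors. The Sepia filter (documented in beatmup.filters below) is used as the example task; its default constructor is an assumption of this sketch:

    import beatmup

    ctx = beatmup.Context()
    ctx.warm_up_gpu()                # optional: pay the GPU start-up cost upfront

    task = beatmup.filters.Sepia()   # default constructor assumed
    task.input = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32,
                                                beatmup.PixelFormat.TRIPLE_BYTE)
    task.output = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32,
                                                 beatmup.PixelFormat.TRIPLE_BYTE)

    job = ctx.submit_task(task)      # non-blocking: returns a job number
    # ... do other work ...
    ctx.wait_for_job(job)            # block until this specific job finishes
    ctx.check()                      # rethrow exceptions raised during execution, if any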

class beatmup.CustomPipeline

Custom pipeline: a sequence of tasks to be executed as a whole. Acts as an AbstractTask. Built by adding tasks one by one and calling measure() at the end.

class TaskHolder

A task within a pipeline

get_run_time(self: beatmup.CustomPipeline.TaskHolder) float

Returns last execution time in milliseconds

get_task(self: beatmup.CustomPipeline.TaskHolder) beatmup.AbstractTask

Returns the task in the current holder

add_task(self: beatmup.CustomPipeline, task: beatmup.AbstractTask) beatmup.CustomPipeline.TaskHolder

Adds a new task to the end of the pipeline

get_task(self: beatmup.CustomPipeline, index: int) beatmup.CustomPipeline.TaskHolder

Retrieves a task by its index

get_task_count(self: beatmup.CustomPipeline) int

Returns number of tasks in the pipeline

get_task_index(self: beatmup.CustomPipeline, holder: beatmup.CustomPipeline.TaskHolder) int

Retrieves task index if it is in the pipeline; returns -1 otherwise

insert_task(self: beatmup.CustomPipeline, task: beatmup.AbstractTask, before: beatmup.CustomPipeline.TaskHolder) beatmup.CustomPipeline.TaskHolder

Inserts a task in a specified position of the pipeline before another task

measure(self: beatmup.CustomPipeline) None

Determines pipeline execution mode and required thread count

remove_task(self: beatmup.CustomPipeline, task: beatmup.CustomPipeline.TaskHolder) bool

Removes a task from the pipeline, if present. Returns True on success

class beatmup.FloodFill

Discovers areas of similar colors up to a tolerance threshold around given positions (seeds) in the input image. These areas are filled with white color in another image (output). If the output bitmap is a binary mask, corresponding pixels are set to 1. The rest of the output image remains unchanged. Optionally, computes contours around the discovered areas and stores the contour positions. Also optionally, applies post-processing by dilating or eroding the discovered regions in the output image.

class BorderMorphology

Morphological postprocessing operation applied to the discovered connected components

Members:

NONE : no postprocessing

DILATE : apply a dilation

ERODE : apply an erosion

property name
get_bounds(self: beatmup.FloodFill, arg0: tuple) tuple

Returns bounding box of the computed mask

get_contour(self: beatmup.FloodFill, index: int) beatmup.IntegerContour2D

Returns a contour by index if compute_contours was set to True, throws an exception otherwise

get_contour_count(self: beatmup.FloodFill) int

Returns number of discovered contours

property input

Input bitmap

property output

Output bitmap

set_border_postprocessing(self: beatmup.FloodFill, operation: beatmup.FloodFill.BorderMorphology, hold_radius: float, release_radius: float) None

Specifies a morphological operation to apply to the mask border.

Parameters:
  • operation – a postprocessing operation

  • hold_radius – erosion/dilation hold radius (output values set to 1)

  • release_radius – erosion/dilation radius of transition from 1 to 0

set_compute_contours(self: beatmup.FloodFill, compute: bool) None

Enables or disables contours computation

set_mask_pos(self: beatmup.FloodFill, pos: tuple) None

Specifies left-top corner position of the mask inside the input bitmap

set_seeds(self: beatmup.FloodFill, seeds: list) None

Specifies a set of seeds (starting points)

property tolerance

Intensity tolerance
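
A sketch putting the above together; the FloodFill default constructor is an assumption of this example, and the test bitmaps are produced with beatmup.bitmaptools.chessboard (documented below):

    import beatmup

    ctx = beatmup.Context()
    image = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32,
                                           beatmup.PixelFormat.TRIPLE_BYTE)
    mask = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32)   # BINARY_MASK by default
    mask.zero()

    fill = beatmup.FloodFill()          # default constructor assumed
    fill.input = image
    fill.output = mask
    fill.tolerance = 0.1
    fill.set_seeds([(10, 10), (200, 150)])
    fill.set_compute_contours(True)
    ctx.perform_task(fill)

    for i in range(fill.get_contour_count()):
        contour = fill.get_contour(i)
        print(contour.get_point_count(), "points, length", contour.get_length())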

class beatmup.ImageShader

A GLSL program to process images

set_source_code(self: beatmup.ImageShader, glsl: str) None
Passes new source code to the fragment shader.

The new source code will be compiled and linked when the next rendering occurs.

class beatmup.IntegerContour2D

A sequence of integer-valued 2D points

add_point(self: beatmup.IntegerContour2D, x: int, y: int) None

Adds a new point to the end of the contour. Some points may be skipped to optimize the storage.

clear(self: beatmup.IntegerContour2D) None

Removes contour content

get_length(self: beatmup.IntegerContour2D) float

Returns contour length

get_point(self: beatmup.IntegerContour2D, index: int) tuple

Returns a point by its index

get_point_count(self: beatmup.IntegerContour2D) int

Returns number of points in the contour

class beatmup.InternalBitmap

Bitmap whose memory is managed by the Beatmup engine. Main pixel data container used internally by Beatmup. Applications would typically use a different incarnation of AbstractBitmap implementing I/O operations, and InternalBitmap instances are used to exchange data between different processing entities (AbstractTask instances) within the application.

class beatmup.Metric

Measures the difference between two bitmaps

class Norm

Norm (distance) to measure between two images

Members:

L1 : sum of absolute differences

L2 : Euclidean distance: square root of the sum of squared differences

property name
get_result(self: beatmup.Metric) float

Returns the measurement result (after the task is executed)

static psnr(bitmap1: beatmup.AbstractBitmap, bitmap2: beatmup.AbstractBitmap) float

Computes peak signal-to-noise ratio in dB for two given images

set_bitmaps(*args, **kwargs)

Overloaded function.

  1. set_bitmaps(self: beatmup.Metric, bitmap1: beatmup.AbstractBitmap, bitmap2: beatmup.AbstractBitmap) -> None

Sets input images

  2. set_bitmaps(self: beatmup.Metric, bitmap1: beatmup.AbstractBitmap, roi1: tuple, bitmap2: beatmup.AbstractBitmap, roi2: tuple) -> None

Sets input images and rectangular regions delimiting the measurement areas

set_norm(self: beatmup.Metric, arg0: beatmup.Metric.Norm) None

Specifies the norm to use in the measurement
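
A measurement sketch; the Metric default constructor is an assumption of this example, and the test images are built with beatmup.bitmaptools.chessboard and noise (documented below):

    import beatmup

    ctx = beatmup.Context()
    fmt = beatmup.PixelFormat.TRIPLE_BYTE
    a = beatmup.bitmaptools.chessboard(ctx, 128, 128, 16, fmt)
    b = beatmup.bitmaptools.chessboard(ctx, 128, 128, 16, fmt)
    beatmup.bitmaptools.noise(b)                 # make the two images differ

    metric = beatmup.Metric()                    # default constructor assumed
    metric.set_bitmaps(a, b)
    metric.set_norm(beatmup.Metric.Norm.L2)
    ctx.perform_task(metric)
    print("L2 distance:", metric.get_result())
    print("PSNR (dB):", beatmup.Metric.psnr(a, b))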

class beatmup.Multitask

Conditional multiple tasks execution.

Beatmup offers a number of tools allowing several tasks to be pipelined into a single one. This technique is particularly useful for designing complex multi-stage image processing pipelines.

Multitask is the simplest such tool. It allows concatenating different tasks into a linear conveyor and running them all or selectively. To handle this selection, each task is associated with a repetition policy specifying the conditions under which the task is executed or ignored when the pipeline is run.

Specifically, there are two extreme modes that force the task to be executed every time (REPEAT_ALWAYS) or skipped unconditionally (IGNORE_ALWAYS), and two more sophisticated modes with the following behavior:

  • IGNORE_IF_UPTODATE skips the task if none of the tasks coming before it in the pipeline were executed;

  • REPEAT_UPDATE forces the task to be executed once on the next run and then switches the repetition policy to IGNORE_IF_UPTODATE.

class RepetitionPolicy

Determines when a specific task in the sequence is run when the whole sequence is invoked

Members:

REPEAT_ALWAYS : execute the task unconditionally on each run

REPEAT_UPDATE : execute the task one time then switch to IGNORE_IF_UPTODATE

IGNORE_IF_UPTODATE : do not execute the task if no preceding tasks are run

IGNORE_ALWAYS : do not execute the task

property name
get_repetition_policy(self: beatmup.Multitask, task: beatmup.CustomPipeline.TaskHolder) beatmup.Multitask.RepetitionPolicy

Returns repetition policy of a specific task in the pipeline.

set_repetition_policy(self: beatmup.Multitask, task: beatmup.CustomPipeline.TaskHolder, policy: beatmup.Multitask.RepetitionPolicy) None

Sets the repetition policy of a task. If the pipeline is running at the moment of the call, it is the application's responsibility to abort and restart it if the policy change needs to be applied immediately.
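
A sketch chaining two filters with different repetition policies; the default constructors of Multitask, Sepia and ColorMatrix are assumptions of this example, and ColorMatrix is assumed to expose the PixelwiseFilter input/output properties:

    import beatmup

    ctx = beatmup.Context()
    fmt = beatmup.PixelFormat.TRIPLE_BYTE
    image = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32, fmt)
    temp = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32, fmt)
    result = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32, fmt)

    sepia = beatmup.filters.Sepia()              # default constructor assumed
    sepia.input, sepia.output = image, temp

    contrast = beatmup.filters.ColorMatrix()     # default constructor assumed
    contrast.apply_contrast(1.5)
    contrast.input, contrast.output = temp, result

    multitask = beatmup.Multitask()              # default constructor assumed
    stage1 = multitask.add_task(sepia)
    stage2 = multitask.add_task(contrast)
    multitask.measure()

    # Run the first stage once, then skip it as long as nothing before it changes;
    # the second stage is executed on every run.
    multitask.set_repetition_policy(stage1, beatmup.Multitask.RepetitionPolicy.REPEAT_UPDATE)
    multitask.set_repetition_policy(stage2, beatmup.Multitask.RepetitionPolicy.REPEAT_ALWAYS)
    ctx.perform_task(multitask)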

class beatmup.PixelFormat

Specifies bitmap pixel format

Members:

SINGLE_BYTE : single channel of 8 bits per pixel (like grayscale), unsigned integer values

TRIPLE_BYTE : 3 channels of 8 bits per pixel (like RGB), unsigned integer values

QUAD_BYTE : 4 channels of 8 bits per pixel (like RGBA), unsigned integer values

SINGLE_FLOAT : single channel of 32 bits per pixel (like grayscale), single precision floating point values

TRIPLE_FLOAT : 3 channels of 32 bits per pixel, single precision floating point values

QUAD_FLOAT : 4 channels of 32 bits per pixel, single precision floating point values

BINARY_MASK : 1 bit per pixel

QUATERNARY_MASK : 2 bits per pixel

HEX_MASK : 4 bits per pixel

property name
class beatmup.Scene

An ordered set of layers representing renderable content

class BitmapLayer

Layer having an image to render. The image has a position and orientation with respect to the layer. This is expressed with an affine mapping applied on top of the layer mapping.

property bitmap

Bitmap attached to the layer

property bitmap_mapping

Bitmap geometry mapping applied on top of the layer mapping

property modulation_color

Modulation color (R, G, B, A). Multiplies bitmap pixel colors when rendering

class CustomMaskedBitmapLayer

Layer containing a bitmap and a mask applied to the bitmap when rendering. Both bitmap and mask have their own positions and orientations relative to the layer’s position and orientation.

property background_color

Background color (R, G, B, A). Fills layer pixels falling outside of the mask area

property mask_mapping

Mask geometry mapping applied on top of the layer mapping

class Layer

Abstract scene layer having name, type, geometry and some content to display. The layer geometry is defined by an AffineMapping describing the position and the orientation of the layer content in the rendered image.

class Type

Layer type

Members:

SCENE : layer containing a scene

BITMAP : layer displaying a bitmap

MASKED_BITMAP : layer displaying a bitmap with mask

SHAPED_BITMAP : layer displaying a bitmap within a shape

SHADED_BITMAP : layer displaying a bitmap through a custom fragment shader

property name
get_child(*args, **kwargs)

Overloaded function.

  1. get_child(self: beatmup.Scene.Layer, x: float, y: float, recursion_depth: int = 0) -> beatmup.Scene.Layer

Picks a child layer at given point, if any

  2. get_child(self: beatmup.Scene.Layer, point: tuple, recursion_depth: int = 0) -> beatmup.Scene.Layer

Picks a child layer at given point, if any

get_type(self: beatmup.Scene.Layer) beatmup.Scene.Layer.Type

Returns layer type

property mapping

Layer mapping in parent coordinates

property phantom

If set to True, the layer goes “phantom”: it and its sublayers, if any, are ignored when searching a layer by point.

test_point(*args, **kwargs)

Overloaded function.

  1. test_point(self: beatmup.Scene.Layer, x: float, y: float) -> bool

Tests if a given point falls in the layer

  2. test_point(self: beatmup.Scene.Layer, point: tuple) -> bool

Tests if a given point falls in the layer

property visible

Controls the layer visibility. If set to False, the layer and its sublayers are ignored when rendering.

class MaskedBitmapLayer

Bitmap layer using another bitmap as a mask

property mask

Mask bitmap

class SceneLayer

Layer containing an entire scene

get_scene(self: beatmup.Scene.SceneLayer) beatmup.Scene

Returns a Scene contained in the Layer

class ShadedBitmapLayer

Bitmap layer using a custom shader

property shader

Fragment shader taking the layer bitmap as texture

class ShapedBitmapLayer

Layer containing a bitmap and a parametric mask (shape)

property border_width

Mask border thickness in pixels or normalized coordinates. These pixels are cropped out from the image and replaced with the background color.

property corner_radius

Radius of mask corners in pixels or normalized coordinates

property in_pixels

If set to True, all the parameter values are interpreted as if given in pixels. Otherwise the normalized coordinates are used.

property slope_width

Mask border slope width in pixels or normalized coordinates. The border slope is a linear transition from background color to image pixels.

add_scene(self: beatmup.Scene, arg0: beatmup.Scene) beatmup.Scene.SceneLayer

Adds a subscene to the current scene.

get_layer(*args, **kwargs)

Overloaded function.

  1. get_layer(self: beatmup.Scene, name: str) -> beatmup.Scene.Layer

Retrieves a layer by its name or None if not found

  2. get_layer(self: beatmup.Scene, index: int) -> beatmup.Scene.Layer

Retrieves a layer by its index

  3. get_layer(self: beatmup.Scene, x: float, y: float, recursion_depth: int = 0) -> beatmup.Scene.Layer

Retrieves a layer present at a specific point of the scene or None if not found

get_layer_count(self: beatmup.Scene) int

Returns total number of layers in the scene

get_layer_index(self: beatmup.Scene, layer: beatmup.Scene.Layer) int

Retrieves layer index in the scene or -1 if not found

new_bitmap_layer(*args, **kwargs)

Overloaded function.

  1. new_bitmap_layer(self: beatmup.Scene, name: str) -> beatmup.Scene.BitmapLayer

Creates a new bitmap layer

  2. new_bitmap_layer(self: beatmup.Scene) -> beatmup.Scene.BitmapLayer

Creates a new bitmap layer

new_masked_bitmap_layer(*args, **kwargs)

Overloaded function.

  1. new_masked_bitmap_layer(self: beatmup.Scene, name: str) -> beatmup.Scene.MaskedBitmapLayer

Creates a new masked bitmap layer

  2. new_masked_bitmap_layer(self: beatmup.Scene) -> beatmup.Scene.MaskedBitmapLayer

Creates a new masked bitmap layer

new_shaded_bitmap_layer(*args, **kwargs)

Overloaded function.

  1. new_shaded_bitmap_layer(self: beatmup.Scene, name: str) -> beatmup.Scene.ShadedBitmapLayer

Creates a new shaded bitmap layer

  2. new_shaded_bitmap_layer(self: beatmup.Scene) -> beatmup.Scene.ShadedBitmapLayer

Creates a new shaded bitmap layer

new_shaped_bitmap_layer(*args, **kwargs)

Overloaded function.

  1. new_shaped_bitmap_layer(self: beatmup.Scene, name: str) -> beatmup.Scene.ShapedBitmapLayer

Creates a new shaped bitmap layer

  2. new_shaped_bitmap_layer(self: beatmup.Scene) -> beatmup.Scene.ShapedBitmapLayer

Creates a new shaped bitmap layer

class beatmup.SceneRenderer

AbstractTask rendering a Scene. The rendering may be done to a given bitmap or on screen, if the platform supports on-screen rendering.

class OutputMapping

Scene coordinates to output (screen or bitmap) pixel coordinates mapping

Members:

STRETCH : output viewport covers entirely the scene axis span, aspect ratio is not preserved in general

FIT_WIDTH_TO_TOP : width is covered entirely, height is resized to keep aspect ratio, the top borders are aligned

FIT_WIDTH : width is covered entirely, height is resized to keep aspect ratio, point (0.5, 0.5) is mapped to the output center

FIT_HEIGHT : height is covered entirely, width is resized to keep aspect ratio, point (0.5, 0.5) is mapped to the output center

property name
property background_image

Image to pave the background.

property output

Output bitmap

property output_mapping

Specifies how the scene coordinates [0,1]² are mapped to the output (screen or bitmap) pixel coordinates.

property output_pixels_fetching

If set to True, the output image data is pulled from GPU to CPU memory every time the rendering is done. This is convenient if the rendered image is an application output result, and is further stored or sent through the network. Otherwise, if the image is to be further processed inside Beatmup, the pixel transfer likely introduces an unnecessary latency and may cause FPS drop in real-time rendering. Has no effect in on-screen rendering.

property output_reference_width

Value overriding output width for elements that have their size in pixels, in order to render a resolution-independent picture

pick_layer(self: beatmup.SceneRenderer, x: float, y: float, inPixels: bool) beatmup.Scene.Layer

Searches for a layer at a given position. In contrast to get_layer() it takes into account the output mapping.

Parameters:
  • x – x coordinate.

  • y – y coordinate.

  • inPixels – If True, the coordinates are taken in pixels.

Returns the topmost layer at the given position, or None if no layer is found.

reset_output(self: beatmup.SceneRenderer) None

Removes a bitmap from the renderer output, if any, and switches to on-screen rendering. The rendering is done on the display currently connected to the Context running the rendering task.

property scene

Scene
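
A rendering sketch; the default constructors of Scene and SceneRenderer are assumptions of this example:

    import beatmup

    ctx = beatmup.Context()
    scene = beatmup.Scene()                         # default constructor assumed
    layer = scene.new_bitmap_layer()
    layer.bitmap = beatmup.bitmaptools.chessboard(ctx, 256, 256, 32,
                                                  beatmup.PixelFormat.TRIPLE_BYTE)

    renderer = beatmup.SceneRenderer()              # default constructor assumed
    renderer.scene = scene
    renderer.output = beatmup.bitmaptools.chessboard(ctx, 512, 512, 32,
                                                     beatmup.PixelFormat.QUAD_BYTE)
    renderer.output_mapping = beatmup.SceneRenderer.OutputMapping.FIT_WIDTH
    renderer.output_pixels_fetching = True          # pull the result back to CPU memory
    ctx.perform_task(renderer)
    renderer.output.save_bmp("render.bmp")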

class beatmup.ShaderApplicator

A task applying an image shader to bitmaps

add_sampler(self: beatmup.ShaderApplicator, bitmap: beatmup.AbstractBitmap, uniform_name: str = 'image') None

Connects a bitmap to a shader uniform variable. The bitmap connected to ImageShader::INPUT_IMAGE_ID is used to resolve the sampler type (ImageShader::INPUT_IMAGE_DECL_TYPE).

clear_samplers(self: beatmup.ShaderApplicator) None

Clears all connections of bitmaps to samplers

property output_bitmap

Output bitmap

remove_sampler(self: beatmup.ShaderApplicator, uniform_name: str) bool

Removes a sampler with a uniform variable name. Returns True if a sampler associated with the given variable existed and was removed, False otherwise.

property shader

Shader to apply to the bitmap(s)

class beatmup.WritableChunkCollection

Writable ChunkCollection implementation for Python. Allows exchanging binary data without copying.

save(self: beatmup.WritableChunkCollection, filename: str, append: bool) None

Saves the collection to a file.

Parameters:
  • filename – The name of the file to write chunks to

  • append – If True, the chunks are appended to the end of the file (keeping the existing content); otherwise the file is overwritten.

beatmup.say_hi() None

Prints some greetings

beatmup.bitmaptools.chessboard(context: beatmup.Context, width: int, height: int, cell_size: int, format: beatmup.PixelFormat = <PixelFormat.BINARY_MASK: 6>) beatmup.InternalBitmap

Renders a chessboard image.

Parameters:
  • context – a Context instance

  • width – width in pixels of the resulting bitmap

  • height – height in pixels of the resulting bitmap

  • cell_size – size of a single chessboard cell in pixels

  • format – pixel format of the resulting bitmap

beatmup.bitmaptools.invert(input: beatmup.AbstractBitmap, output: beatmup.AbstractBitmap) None

Inverts the colors of an image in a pixelwise fashion

beatmup.bitmaptools.make_copy(bitmap: beatmup.AbstractBitmap, context: beatmup.Context, format: beatmup.PixelFormat) beatmup.InternalBitmap

Makes a copy of a bitmap for a given Context converting the data to a given pixel format. Can be used to exchange image content between different instances of Context. The copy is done in an AbstractTask run in the default thread pool of the source bitmap context.

Parameters:
  • bitmap – the bitmap to copy

  • context – the Context instance the copy is associated with

  • format – pixel format of the copy
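
For example, the following sketch renders a chessboard, copies it within the same context and inverts the copy:

    import beatmup

    ctx = beatmup.Context()
    board = beatmup.bitmaptools.chessboard(ctx, 128, 128, 16,
                                           beatmup.PixelFormat.TRIPLE_BYTE)
    copy = beatmup.bitmaptools.make_copy(board, ctx, beatmup.PixelFormat.TRIPLE_BYTE)
    beatmup.bitmaptools.invert(board, copy)      # write the inverted chessboard into the copy
    copy.save_bmp("inverted_chessboard.bmp")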

beatmup.bitmaptools.make_opaque(bitmap: beatmup.AbstractBitmap, area: tuple) None

Makes a bitmap area opaque

beatmup.bitmaptools.noise(*args, **kwargs)

Overloaded function.

  1. noise(bitmap: beatmup.AbstractBitmap) -> None

Fills a given bitmap with random noise.

  2. noise(bitmap: beatmup.AbstractBitmap, area: tuple) -> None

Replaces a rectangular area in a bitmap by random noise.

beatmup.bitmaptools.scanline_search(source: beatmup.AbstractBitmap, value: tuple, start_from: tuple) object

Goes through a bitmap in scanline order (left to right, top to bottom) until a pixel of a given color is met.

Parameters:
  • source – the bitmap to scan

  • value – the color value to look for

  • start_from – starting pixel position

Returns the next closest position of the searched value (in scanline order) or None if not found.

beatmup.filters module

ColorMatrix

Color matrix filter: applies an affine mapping Ax + B at each pixel of a given image in RGBA space

PixelwiseFilter

Base class for image filters processing a given bitmap in a pixelwise fashion.

Sepia

Sepia filter: an example of PixelwiseFilter implementation.

class beatmup.filters.ColorMatrix

Color matrix filter: applies an affine mapping Ax + B at each pixel of a given image in RGBA space

apply_contrast(self: beatmup.filters.ColorMatrix, factor: float) None

Applies a contrast adjustment by a given factor on top of the current transformation

set_brightness(self: beatmup.filters.ColorMatrix, brightness: float) None

Sets a brightness adjustment by a given factor (non-cumulative with respect to the current transformation)

set_coefficients(self: beatmup.filters.ColorMatrix, out_channel: int, add: float, rgba: tuple) None

Sets color matrix coefficients for a specific output color channel

set_color_inversion(self: beatmup.filters.ColorMatrix, preserved_hue: tuple, saturation_factor: float, value_factor: float) None

Resets the current transformation to a fancy color inversion mode with a fixed hue point

set_hsv_correction(self: beatmup.filters.ColorMatrix, hue_shift_degrees: float, saturation_factor: float, value_factor: float) None

Resets the current transformation to a matrix performing standard HSV correction

class beatmup.filters.PixelwiseFilter

Base class for image filters processing a given bitmap in a pixelwise fashion.

property input

Input bitmap

property output

Output bitmap

class beatmup.filters.Sepia

Sepia filter: an example of PixelwiseFilter implementation.

beatmup.gl module

TextureHandler

A texture stored in GPU memory

VariablesBundle

Collection storing GLSL program parameters (scalars, matrices, vectors) to communicate them from user to GPU-managing thread

class beatmup.gl.TextureHandler

A texture stored in GPU memory

get_depth(self: beatmup.gl.TextureHandler) int

Returns depth of the texture in pixels

get_height(self: beatmup.gl.TextureHandler) int

Returns height of the texture in pixels

get_number_of_channels(self: beatmup.gl.TextureHandler) int

Returns the number of channels contained in the texture

get_width(self: beatmup.gl.TextureHandler) int

Returns width of the texture in pixels

class beatmup.gl.VariablesBundle

Collection storing GLSL program parameters (scalars, matrices, vectors) to communicate them from user to GPU-managing thread

set_float(*args, **kwargs)

Overloaded function.

  1. set_float(self: beatmup.gl.VariablesBundle, name: str, value: float) -> None

Sets a scalar float uniform value

  2. set_float(self: beatmup.gl.VariablesBundle, name: str, x: float, y: float) -> None

Sets a 2D float uniform vector value

  3. set_float(self: beatmup.gl.VariablesBundle, name: str, x: float, y: float, z: float) -> None

Sets a 3D float uniform vector value

  4. set_float(self: beatmup.gl.VariablesBundle, name: str, x: float, y: float, z: float, w: float) -> None

Sets a 4D float uniform vector value

set_float_array(self: beatmup.gl.VariablesBundle, name: str, values: List[float]) None

Sets a float array variable value

set_float_matrix2(self: beatmup.gl.VariablesBundle, name: str, matrix: List[float]) None

Sets a float 2x2 matrix variable value

set_float_matrix3(self: beatmup.gl.VariablesBundle, name: str, matrix: List[float]) None

Sets a float 3x3 matrix variable value

set_float_matrix4(self: beatmup.gl.VariablesBundle, name: str, matrix: List[float]) None

Sets a float 4x4 matrix variable value

set_integer(*args, **kwargs)

Overloaded function.

  1. set_integer(self: beatmup.gl.VariablesBundle, name: str, value: int) -> None

Sets a scalar integer uniform value

  2. set_integer(self: beatmup.gl.VariablesBundle, name: str, x: int, y: int) -> None

Sets a 2D integer uniform vector value

  3. set_integer(self: beatmup.gl.VariablesBundle, name: str, x: int, y: int, z: int) -> None

Sets a 3D integer uniform vector value

  4. set_integer(self: beatmup.gl.VariablesBundle, name: str, x: int, y: int, z: int, w: int) -> None

Sets a 4D integer uniform vector value

beatmup.nnets module

ActivationFunction

Activation function specification

AbstractOperation

Abstract neural net operation (layer).

Classifier

Image classifier base class.

Conv2D

2D convolution operation computed on GPU.

Dense

Dense (linear) layer.

DeserializedModel

Model reconstructed from a serialized representation.

ImageSampler

Image preprocessing operation.

InferenceTask

Task running inference of a Model

Model

Neural net model.

Padding

Zero padding specification

Pooling2D

2D pooling operation computed on GPU.

Softmax

Softmax layer.

class beatmup.nnets.AbstractOperation

Abstract neural net operation (layer). Has a name used to refer the operation in a Model. The operation data (such as convolution weights) is provided through a ChunkCollection in single precision floating point format, where the chunks are searched by operation name. Operations have several inputs and outputs numbered starting from zero.

property input_count

Number of operation inputs

property name

Operation name

property output_count

Number of operation outputs

class beatmup.nnets.ActivationFunction

Activation function specification

Members:

DEFAULT : default activation: 0..1 bounded ReLU (identity clipped to 0..1 range)

BRELU6 : 0.167 times identity clipped to 0..1 range

SIGMOID_LIKE : a piecewise-linear sigmoid function approximation

property name
class beatmup.nnets.Classifier

Image classifier base class. Makes a runnable AbstractTask from a Model. Adds an image input and a vector of probabilities for output.

get_probabilities(self: beatmup.nnets.Classifier) List[float]

Returns the last classification results (vector of probabilities per class).

start(self: beatmup.nnets.Classifier, arg0: beatmup.AbstractBitmap) int

Initiates the classification of a given image. The call is non-blocking.

Parameters:

input – The input image

Returns a job corresponding to the submitted task.

class beatmup.nnets.Conv2D

2D convolution operation computed on GPU. Has 2 inputs: main and residual (detailed below), and a single output. Constraints:

  • Input and output are 3D tensors with values in [0, 1] range sampled over 8 bits.

  • Number of input feature maps is 3 or a multiple of 4.

  • Number of output feature maps is a multiple of 4.

  • For group convolutions, each group contains a multiple of 4 input channels and a multiple of 4 output channels, or exactly 1 input and 1 output channel (i.e., depthwise).

  • Kernels are of square shape.

  • Strides are equal along X and Y.

  • Dilations are equal to 1.

  • If an image is given on input (3 input feature maps), only valid padding is supported.

  • An activation function is always applied on output.

Raspberry Pi-related constraints:

  • Pi cannot sample more than 256 channels to compute a single output value. The actual practical limit is lower: around 128 channels for pointwise convolutions and less than 100 channels for bigger kernels. When the limit is reached, the Pi OpenGL driver reports an out of memory error (0x505).

Features:

  • Bias addition integrated.

  • An optional residual input is available: a tensor of output shape added to the convolution result before applying the activation function.

property use_bias

Returns true if bias addition is enabled

class beatmup.nnets.Dense

Dense (linear) layer. Computes A*x + b for an input feature vector x, a matrix A and an optional bias vector b. Accepts a GL::Vector or a flat Storage view on input, and only a GL::Vector on output.

class beatmup.nnets.DeserializedModel

Model reconstructed from a serialized representation. The representation format is the one rendered with Model::serialize(): a YAML-like listing containing “ops” and “connections” sections describing the model operations in execution order and connections between them respectively (see NNetsModelSerialization).

class beatmup.nnets.ImageSampler
Image preprocessing operation.

Samples an image of a fixed size from an arbitrary size texture. Has three key missions:

  • If enabled, performs a center crop keeping the output aspect ratio (otherwise the input is stretched to fit the output).

  • If enabled, uses linear interpolation when possible to reduce aliasing (otherwise nearest neighbor sampling is used).

  • Brings support for OES textures, which allows, for example, reading data directly from the camera on Android.

property rotation

Number of times a clockwise rotation by 90 degrees is applied to the input image

class beatmup.nnets.InferenceTask

Task running inference of a Model

connect(*args, **kwargs)

Overloaded function.

  1. connect(self: beatmup.nnets.InferenceTask, image: beatmup.AbstractBitmap, op_name: str, input_index: int = 0) -> None

    Connects an image to a specific operation input. Ensures the image content is up-to-date in GPU memory by the time the inference is run.

    image:

    the image

    op_name:

    the operation name

    input_index:

    the input index of the operation

  2. connect(self: beatmup.nnets.InferenceTask, image: beatmup.AbstractBitmap, operation: beatmup.nnets.AbstractOperation, input_index: int = 0) -> None

    Connects an image to a specific operation input. Ensures the image content is up-to-date in GPU memory by the time the inference is run.

    image:

    The image

    operation:

    The operation

    input_index:

    The input index of the operation
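
A minimal end-to-end sketch combining beatmup_keras.export_model (see the beatmup_keras module above) with InferenceTask; the InferenceTask constructor taking a model and its data is an assumption of this example, and the untrained single-layer network is purely illustrative:

    import tensorflow as tf
    import beatmup
    import beatmup_keras

    ctx = beatmup.Context()
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, strides=2,
                               activation=beatmup_keras.brelu1,
                               input_shape=(32, 32, 3)),
    ])
    model, model_data = beatmup_keras.export_model(keras_model, ctx)
    model.add_output(model.get_last_operation())         # enable reading the result

    inference = beatmup.nnets.InferenceTask(model, model_data)   # constructor signature assumed
    image = beatmup.bitmaptools.chessboard(ctx, 32, 32, 4,
                                           beatmup.PixelFormat.TRIPLE_BYTE)
    inference.connect(image, model.get_first_operation())
    ctx.perform_task(inference)
    print(model.get_output_data(model.get_last_operation()))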

class beatmup.nnets.Model

Neural net model. Contains a list of operations and programmatically defined interconnections between them using addConnection(). Enables access to the model memory at any point in the model through addOutput() and getModelData(). The memory needed to store internal data during the inference is allocated automatically; storages are reused when possible. The inference of a Model is performed by InferenceTask.

add_connection(self: beatmup.nnets.Model, source_op: str, dest_op: str, output: int = 0, input: int = 0, shuffle: int = 0) None

Adds a connection between two given ops.

Parameters:
  • source_op – name of the operation emitting the data

  • dest_op – name of the operation receiving the data

  • output – output number of the source operation

  • input – input number of the destination operation

  • shuffle – if greater than zero, the storage is shuffled. For shuffle = n, the output channels are sent to the destination operation in the following order: 0, 1, 2, 3, 4n, 4n+1, 4n+2, 4n+3, 8n, 8n+1, 8n+2, 8n+3, …, 4, 5, 6, 7, 4n+4, 4n+5, 4n+6, 4n+7, 8n+4, …

add_operation(self: beatmup.nnets.Model, op_name: str, new_op: beatmup.nnets.AbstractOperation) None

Adds a new operation to the model before another operation in the execution order. The Model does not take ownership of the passed operation. The new operation is not automatically connected to other operations.

Parameters:
  • op_name – name of the operation the new operation is inserted before

  • new_op – the new operation

add_output(*args, **kwargs)

Overloaded function.

  1. add_output(self: beatmup.nnets.Model, operation: str, output: int = 0) -> None

    Enables reading output data from the model memory through get_output_data(). A given operation output is connected to a storage that might be accessed by the application after the run.

    operation:

    name of the operation to get data from

    output:

    the operation output index

  2. add_output(self: beatmup.nnets.Model, operation: beatmup.nnets.AbstractOperation, output: int = 0) -> None

    Enables reading output data from the model memory through get_output_data(). A given operation output is connected to a storage that might be accessed by the application after the run.

    operation:

    operation to get data from. If not in the model, an exception is thrown.

    output:

    the operation output index

append(self: beatmup.nnets.Model, new_op: beatmup.nnets.AbstractOperation, connect: bool = False) None

Adds a new operation to the model. The operation is added to the end of the operations list. The execution order corresponds to the addition order.

Parameters:
  • new_op – the new operation

  • connect – if True, the main input of the new operation is connected to the main output of the last operation already in the model

count_multiply_adds(self: beatmup.nnets.Model) int

Provides an estimation of the number of multiply-adds characterizing the model complexity.

count_texel_fetches(self: beatmup.nnets.Model) int

Provides an estimation of the total number of texels fetched by all the operations in the model per image.

get_first_operation(self: beatmup.nnets.Model) beatmup.nnets.AbstractOperation

Returns the first operation in the model

get_last_operation(self: beatmup.nnets.Model) beatmup.nnets.AbstractOperation

Returns the last operation in the model

get_output_data(*args, **kwargs)

Overloaded function.

  1. get_output_data(self: beatmup.nnets.Model, op_name: str, output: int = 0) -> object

    Reads data from the model memory. add_output() must be called first in order to enable reading the data; otherwise None is returned.

    op_name:

    name of the operation to get data from

    output:

    the operation output index

    Returns data array or None.

  2. get_output_data(self: beatmup.nnets.Model, operation: beatmup.nnets.AbstractOperation, output: int = 0) -> object

    Reads data from the model memory. add_output() must be called first in order to enable reading the data; otherwise None is returned.

    operation:

    the operation to get data from

    output:

    the operation output index

    Returns data array or None.

serialize(self: beatmup.nnets.Model) str

Returns serialized representation of the model as a string.

class beatmup.nnets.Padding

Zero padding specification

Members:

SAME : operation output size matches its input size for unit strides

VALID : no zero padding

property name
class beatmup.nnets.Pooling2D

2D pooling operation computed on GPU. Has a single input and a single output. Constraints:

  • Input and output are 3D tensors with values in [0, 1] range sampled over 8 bits.

  • Number of feature maps is a multiple of 4.

  • Pooling area is of square shape.

  • Strides are equal along X and Y.

  • Average pooling only accepts valid zero padding.

Raspberry Pi-related constraints:

  • Pi cannot sample more than 256 channels to compute a single output value. The actual practical limit is lower: the pooling size may be limited to about 10. When the limit is reached, the Pi OpenGL driver reports an out of memory error (0x505).

class Operator

Pooling operator specification

Members:

MAX : max pooling

AVERAGE : average pooling

property name
class beatmup.nnets.Softmax

Softmax layer. It does not have an output, but acts as a sink. The resulting probabilities are returned by get_probabilities(). This operation is executed on CPU.

get_probabilities(self: beatmup.nnets.Softmax) List[float]

Returns the list of probabilities