Beatmup
Beatmup::NNets::InferenceTask Class Reference

Task running inference of a Model. More...

#include <inference_task.h>

Inheritance diagram for Beatmup::NNets::InferenceTask:
    Beatmup::Object → Beatmup::AbstractTask → Beatmup::GpuTask → Beatmup::NNets::InferenceTask
    Beatmup::BitmapContentLock → Beatmup::NNets::InferenceTask
    Beatmup::NNets::InferenceTask → Beatmup::NNets::Classifier (derived)

Public Member Functions

 InferenceTask (Model &model, ChunkCollection &data)
 
void connect (AbstractBitmap &image, AbstractOperation &operation, int inputIndex=0)
 Connects an image to a specific operation input. More...
 
void connect (AbstractBitmap &image, const std::string &operation, int inputIndex=0)
 
- Public Member Functions inherited from Beatmup::Object
virtual ~Object ()
 

Protected Attributes

ChunkCollection & data
 
Model & model
 

Private Member Functions

void beforeProcessing (ThreadIndex threadCount, ProcessingTarget target, GraphicPipeline *gpu) override
 Instruction called before the task is executed. More...
 
void afterProcessing (ThreadIndex threadCount, GraphicPipeline *gpu, bool aborted) override
 Instruction called after the task is executed. More...
 
bool processOnGPU (GraphicPipeline &gpu, TaskThread &thread) override
 Executes the task on GPU. More...
 
bool process (TaskThread &thread) override
 Executes the task on CPU within a given thread. More...
 
ThreadIndex getMaxThreads () const override
 Gives the upper limit on the number of threads that may perform the task. More...
 
- Private Member Functions inherited from Beatmup::BitmapContentLock
 BitmapContentLock ()
 
 ~BitmapContentLock ()
 
void readLock (GraphicPipeline *gpu, AbstractBitmap *bitmap, ProcessingTarget target)
 Locks content of a bitmap for reading using a specific processing target device. More...
 
void writeLock (GraphicPipeline *gpu, AbstractBitmap *bitmap, ProcessingTarget target)
 Locks content of a bitmap for writing using a specific processing target device. More...
 
void unlock (AbstractBitmap *bitmap)
 Drops a lock to the bitmap. More...
 
void unlockAll ()
 Unlocks all the locked bitmaps unconditionally. More...
 
template<const ProcessingTarget target>
void lock (GraphicPipeline *gpu, AbstractBitmap *input, AbstractBitmap *output)
 
void lock (GraphicPipeline *gpu, ProcessingTarget target, AbstractBitmap *input, AbstractBitmap *output)
 
template<const ProcessingTarget target>
void lock (GraphicPipeline *gpu, std::initializer_list< AbstractBitmap * > read, std::initializer_list< AbstractBitmap * > write)
 
template<typename ... Args>
void unlock (AbstractBitmap *first, Args ... others)
 

Private Attributes

std::map< std::pair< AbstractOperation *, int >, AbstractBitmap * > inputImages
 

Additional Inherited Members

- Public Types inherited from Beatmup::AbstractTask
enum class  TaskDeviceRequirement { CPU_ONLY , GPU_OR_CPU , GPU_ONLY }
 Specifies which device (CPU and/or GPU) is used to run the task. More...
 
- Static Public Member Functions inherited from Beatmup::AbstractTask
static ThreadIndex validThreadCount (int number)
 Valid thread count from a given integer value. More...
 

Detailed Description

Task running inference of a Model.

During the first run of this task with a given model, the shader programs are built and memory is allocated. Subsequent runs are much faster.

Definition at line 33 of file inference_task.h.
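
For illustration only (not part of the generated reference), a minimal usage sketch is given below. It assumes a Beatmup::Context offering performTask(), a ChunkFile holding the serialized model data, and a Model already populated with operations; the file name, bitmap size and operation lookup are placeholder assumptions, not prescribed by this class.

    // Illustrative sketch; file name, bitmap size and operation lookup are assumptions.
    Beatmup::Context context;
    Beatmup::ChunkFile modelData("model.chunks");               // hypothetical file name
    Beatmup::NNets::Model model(context);
    // ... the model is assumed to be populated with operations here ...
    Beatmup::InternalBitmap input(context, Beatmup::PixelFormat::TripleByte, 224, 224);
    Beatmup::NNets::InferenceTask inference(model, modelData);
    inference.connect(input, model.getFirstOperation());        // feed the image to the first operation
    context.performTask(inference);   // first run: builds GPU programs, allocates memory
    context.performTask(inference);   // subsequent runs are much faster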

Constructor & Destructor Documentation

◆ InferenceTask()

Beatmup::NNets::InferenceTask::InferenceTask ( Model & model,
ChunkCollection & data 
)
inline

Definition at line 48 of file inference_task.h.

Member Function Documentation

◆ beforeProcessing()

void InferenceTask::beforeProcessing ( ThreadIndex threadCount,
ProcessingTarget target,
GraphicPipeline * gpu 
)
override, private, virtual

Instruction called before the task is executed.

Parameters
    threadCount  Number of threads used to perform the task
    target       Device used to perform the task
    gpu          A graphic pipeline instance; may be null.

Reimplemented from Beatmup::AbstractTask.

Definition at line 31 of file inference_task.cpp.

{
    for (auto it : inputImages)
        readLock(gpu, it.second, ProcessingTarget::GPU);
    model.prepare(*gpu, data);
}

◆ afterProcessing()

void InferenceTask::afterProcessing ( ThreadIndex threadCount,
GraphicPipeline * gpu,
bool aborted 
)
override, private, virtual

Instruction called after the task is executed.

Parameters
    threadCount  Number of threads used to perform the task
    gpu          GPU to be used to execute the task; may be null.
    aborted      true if the task was aborted

Reimplemented from Beatmup::AbstractTask.

Definition at line 38 of file inference_task.cpp.

{
    if (gpu)
        gpu->flush();
    unlockAll();
}

◆ processOnGPU()

bool InferenceTask::processOnGPU ( GraphicPipeline & gpu,
TaskThread & thread 
)
override, private, virtual

Executes the task on GPU.

Parameters
    gpu     graphic pipeline instance
    thread  associated task execution context
Returns
true if the execution is finished correctly, false otherwise

Reimplemented from Beatmup::AbstractTask.

Definition at line 45 of file inference_task.cpp.

{
    model.execute(thread, &gpu);
    return true;
}

◆ process()

bool InferenceTask::process ( TaskThread & thread )
override, private, virtual

Executes the task on CPU within a given thread.

Generally called by multiple threads.

Parameters
    thread  associated task execution context
Returns
true if the execution is finished correctly, false otherwise

Reimplemented from Beatmup::GpuTask.

Definition at line 51 of file inference_task.cpp.

{
    model.execute(thread, nullptr);
    return true;
}

◆ getMaxThreads()

ThreadIndex Beatmup::NNets::InferenceTask::getMaxThreads ( ) const
inline, override, private, virtual

Gives the upper limit on the number of threads that may perform the task.

The actual number of threads running a specific task may be less or equal to the returned value, depending on the number of workers in ThreadPool running the task.

Reimplemented from Beatmup::GpuTask.

Definition at line 41 of file inference_task.h.

{ return MAX_THREAD_INDEX; }

◆ connect() [1/2]

void InferenceTask::connect ( AbstractBitmap & image,
AbstractOperation & operation,
int inputIndex = 0 
)

Connects an image to a specific operation input.

Ensures the image content is up-to-date in GPU memory by the time the inference is run.

Parameters
    [in] image       The image
    [in] operation   The operation
    [in] inputIndex  The input index of the operation

Definition at line 25 of file inference_task.cpp.

{
    inputImages[std::make_pair(&operation, inputIndex)] = &image;
    operation.setInput(image, inputIndex);
}

◆ connect() [2/2]

void Beatmup::NNets::InferenceTask::connect ( AbstractBitmap & image,
const std::string & operation,
int inputIndex = 0 
)
inline

Definition at line 58 of file inference_task.h.

{
    connect(image, model.getOperation(operation), inputIndex);
}
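As a hedged illustration (the operation name "input_op" below is a placeholder), both overloads have the same effect; the by-name variant simply resolves the operation through Model::getOperation() before delegating to the first overload:

    inference.connect(input, model.getFirstOperation());   // by reference
    inference.connect(input, "input_op");                   // by name; "input_op" is hypothetical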

Member Data Documentation

◆ inputImages

std::map<std::pair<AbstractOperation*, int>, AbstractBitmap*> Beatmup::NNets::InferenceTask::inputImages
private

Definition at line 35 of file inference_task.h.

◆ data

ChunkCollection& Beatmup::NNets::InferenceTask::data
protected

Definition at line 44 of file inference_task.h.

◆ model

Model& Beatmup::NNets::InferenceTask::model
protected

Definition at line 45 of file inference_task.h.


The documentation for this class was generated from the following files:
    inference_task.h
    inference_task.cpp