Class NeuralNetwork
Defined in File NeuralNetwork.hpp
Inheritance Relationships
Base Type
public dai::NodeCRTP<Node, NeuralNetwork, NeuralNetworkProperties> (Template Class NodeCRTP)
Class Documentation
-
class NeuralNetwork : public dai::NodeCRTP<Node, NeuralNetwork, NeuralNetworkProperties>
NeuralNetwork node. Runs a neural inference on input data.
Public Functions
-
void setBlobPath(const dai::Path &path)
Load the network blob into assets and use it once the pipeline is started.
- Throws:
Error – if file doesn’t exist or isn’t a valid network blob.
- Parameters:
path – Path to network blob
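A minimal sketch of loading a compiled blob into a NeuralNetwork node, assuming the DepthAI v2 C++ API; the blob path here is a placeholder:

```cpp
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    // Create a NeuralNetwork node and point it at a compiled .blob file.
    // "model.blob" is a placeholder path; setBlobPath throws an Error if
    // the file does not exist or is not a valid network blob.
    auto nn = pipeline.create<dai::node::NeuralNetwork>();
    nn->setBlobPath("model.blob");

    return 0;
}
```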
-
void setBlob(OpenVINO::Blob blob)
Load the network blob into assets and use it once the pipeline is started.
- Parameters:
blob – Network blob
-
void setBlob(const dai::Path &path)
Same functionality as setBlobPath(): load the network blob into assets and use it once the pipeline is started.
- Throws:
Error – if file doesn’t exist or isn’t a valid network blob.
- Parameters:
path – Path to network blob
-
void setNumPoolFrames(int numFrames)
Specifies how many frames will be available in the pool.
- Parameters:
numFrames – How many frames the pool will have
-
void setNumInferenceThreads(int numThreads)
Specifies how many threads the node should use to run the network.
- Parameters:
numThreads – Number of threads to dedicate to this node
-
void setNumNCEPerInferenceThread(int numNCEPerThread)
Specifies how many Neural Compute Engines a single thread should use for inference.
- Parameters:
numNCEPerThread – Number of NCE per thread
-
int getNumInferenceThreads()
Returns how many inference threads will be used to run the network.
- Returns:
Number of threads: 0, 1, or 2. Zero means AUTO.
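The resource settings above can be combined when tuning a node; a sketch, assuming the DepthAI v2 C++ API:

```cpp
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;
    auto nn = pipeline.create<dai::node::NeuralNetwork>();

    nn->setNumPoolFrames(4);             // frames available in the pool
    nn->setNumInferenceThreads(2);       // 0 would mean AUTO
    nn->setNumNCEPerInferenceThread(1);  // NCEs dedicated per thread

    // Reads back the configured thread count.
    int threads = nn->getNumInferenceThreads();
    (void)threads;

    return 0;
}
```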
Public Members
-
Input input = {*this, "in", Input::Type::SReceiver, true, 5, true, {{DatatypeEnum::Buffer, true}}}
Input message with data to be inferred upon. The default queue is blocking with size 5.
-
Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::NNData, false}}}
Outputs an NNData message that carries the inference results.
-
Output passthrough = {*this, "passthrough", Output::Type::MSender, {{DatatypeEnum::Buffer, true}}}
Passthrough message on which the inference was performed.
Suitable for when input queue is set to non-blocking behavior.
-
InputMap inputs
Inputs mapped to network inputs. Useful for inferring from separate data sources. The default input is non-blocking with queue size 1 and waits for messages.
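For multi-input networks, each entry of the inputs map can be linked to a separate source. A sketch, assuming the DepthAI v2 C++ API; the stream names and the tensor names "input0"/"input1" are hypothetical and must match the loaded network's actual input layer names:

```cpp
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;
    auto nn = pipeline.create<dai::node::NeuralNetwork>();
    nn->setBlobPath("two_input_model.blob");  // placeholder path

    // Two host-side sources, each feeding a different network input.
    auto xinA = pipeline.create<dai::node::XLinkIn>();
    xinA->setStreamName("srcA");
    auto xinB = pipeline.create<dai::node::XLinkIn>();
    xinB->setStreamName("srcB");

    // Keys index into the InputMap; they must match the network's
    // input tensor names.
    xinA->out.link(nn->inputs["input0"]);
    xinB->out.link(nn->inputs["input1"]);

    return 0;
}
```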
-
OutputMap passthroughs
Passthroughs corresponding to the specified inputs.
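Putting the members together, a typical single-input pipeline links a source to input and exposes out and passthrough to the host. A sketch, assuming the DepthAI v2 C++ API with XLinkIn/XLinkOut nodes; the blob path and stream names are examples:

```cpp
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto nn = pipeline.create<dai::node::NeuralNetwork>();
    nn->setBlobPath("model.blob");  // placeholder path

    // Host-side input feeding the node's "in" queue.
    auto xin = pipeline.create<dai::node::XLinkIn>();
    xin->setStreamName("nn_in");
    xin->out.link(nn->input);

    // Inference results (NNData) back to the host.
    auto xoutNN = pipeline.create<dai::node::XLinkOut>();
    xoutNN->setStreamName("nn_out");
    nn->out.link(xoutNN->input);

    // Passthrough of the exact frame each inference was run on,
    // useful when the input queue is non-blocking.
    auto xoutPass = pipeline.create<dai::node::XLinkOut>();
    xoutPass->setStreamName("pass");
    nn->passthrough.link(xoutPass->input);

    return 0;
}
```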
Public Static Attributes
-
static constexpr const char *NAME = "NeuralNetwork"