Neural Network (Inference Overlay)

[[File:inference_overlay_neural_network.jpg|thumb|right|Selecting a Neural Network in the [[Inference Overlay]] Wizard]]
A Neural Network in the {{software}} is a pre-trained convolutional network<ref name="Cheatsheet"/> that can be used by an [[Inference Overlay|AI Inference Overlay]] to classify or detect features given one or more input [[Overlay]]s.
Neural Networks are stored in the {{software}} as data [[item]]s with a reference to an [[ONNX]]-file (Open Neural Network Exchange format<ref name="ONNX"/>), which can be inspected via Netron<ref name="Netron"/>.

[[Input tensor (Inference Overlay)|Input]] and [[Output tensor (Inference Overlay)|output]] for Neural Networks are handled using data tensors. These tensors are multi-dimensional data arrays. They are automatically identified when selecting or adding a new Neural Network.
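
Outside the {{software}}, the tensors of an [[ONNX]]-file can also be listed programmatically. The sketch below is a minimal example using the Python onnx package; the file name model.onnx is only a placeholder.
<syntaxhighlight lang="python">
import onnx

# Load the ONNX model; "model.onnx" is a placeholder file name.
model = onnx.load("model.onnx")

def describe(tensors, kind):
    """Print the name and dimensions of each tensor."""
    for tensor in tensors:
        dims = [
            d.dim_param if d.dim_param else d.dim_value
            for d in tensor.type.tensor_type.shape.dim
        ]
        print(f"{kind} tensor '{tensor.name}': dimensions {dims}")

# Input and output tensors are declared in the model's graph.
describe(model.graph.input, "Input")
describe(model.graph.output, "Output")
</syntaxhighlight>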

Whether a Neural Network classifies or detects objects given an input depends on its inference model. Such a model is constructed using AI software, such as [[PyTorch]]. Neural Networks can indicate what type of network they are by defining the [[Inference mode (Inference Overlay)|INFERENCE_MODE]] attribute in their metadata.
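
As an illustration of both points, the sketch below exports a small [[PyTorch]] model to [[ONNX]] and writes an INFERENCE_MODE entry into its metadata. The model architecture, file names and the value CLASSIFICATION are assumptions made for this example only; consult the [[Inference mode (Inference Overlay)|inference mode]] documentation for the values the {{software}} actually expects.
<syntaxhighlight lang="python">
import torch
import onnx

# A minimal classifier, purely for illustration.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 4),  # four hypothetical labels
)
model.eval()

# Export the model to the ONNX format; the tensor names are arbitrary examples.
dummy_input = torch.randn(1, 3, 256, 256)
torch.onnx.export(model, dummy_input, "classifier.onnx",
                  input_names=["input"], output_names=["scores"])

# Add an INFERENCE_MODE entry to the model's metadata.
# The value "CLASSIFICATION" is an assumed example, not a documented constant.
onnx_model = onnx.load("classifier.onnx")
onnx.helper.set_model_props(onnx_model, {"INFERENCE_MODE": "CLASSIFICATION"})
onnx.save(onnx_model, "classifier.onnx")
</syntaxhighlight>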

===Supported Convolution Types===
# Image Classification
#* Classifies a picture using labels, in combination with a predicted probability per label (see the sketch below this list)
# Detection (with masks and bounding boxes)
#* Detects up to several features in a picture
#* Predicts probabilities of features and where they are located
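
For the classification case, the output tensor typically holds one raw score per label. The sketch below shows how such scores can be converted into a predicted probability per label with a softmax; the labels and score values are placeholders, not output of the {{software}}.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical raw scores from a classification output tensor, one per label.
labels = ["grass", "water", "pavement", "building"]
scores = np.array([2.1, 0.3, -1.2, 0.8])

# Softmax turns raw scores into a predicted probability per label.
probabilities = np.exp(scores - scores.max())
probabilities /= probabilities.sum()

for label, p in zip(labels, probabilities):
    print(f"{label}: {p:.2%}")

print("Most likely label:", labels[int(np.argmax(probabilities))])
</syntaxhighlight>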

===Parameters in Metadata===
Neural Networks can also store default parameters in their metadata, so that an [[Inference Overlay]] is configured with suitable values once the network is selected for it. The following parameters are used (a sketch of how they might be stored follows the list):
* Producer and version
* Description
* Preferred min and max [[Grid cell size]] (m).
* [[Model attributes (Inference Overlay)|Inference Overlay attributes]].
* [[Inference Overlay]]'s legend [[Labels result type (Inference Overlay)|labels]], with corresponding value and color (in hex-format).
* Maximum number of detectable features per inference window.
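
How these defaults map onto keys in the ONNX-file is not specified here; the sketch below only illustrates the general idea with the Python onnx package. Producer, version and description use standard ONNX model fields, while the remaining keys and value formats are invented placeholders rather than documented {{software}} names.
<syntaxhighlight lang="python">
import json
import onnx

model = onnx.load("model.onnx")  # placeholder file name

# Producer, version and description map onto standard ONNX model fields.
model.producer_name = "ExampleProducer"
model.producer_version = "1.0"
model.doc_string = "Example description of what the network detects."

# Remaining defaults stored as free-form metadata properties.
# All keys and value formats below are assumptions for illustration only.
# Note: set_model_props replaces any existing metadata properties.
onnx.helper.set_model_props(model, {
    "MIN_GRID_CELL_SIZE_M": "0.25",
    "MAX_GRID_CELL_SIZE_M": "1.0",
    "MAX_DETECTIONS_PER_WINDOW": "100",
    "LEGEND_LABELS": json.dumps([
        {"label": "tree", "value": 1, "color": "#00FF00"},
        {"label": "building", "value": 2, "color": "#FF0000"},
    ]),
})

onnx.save(model, "model.onnx")
</syntaxhighlight>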

{{article end
|seealso=
* [[ONNX]]
* [[PyTorch]]
|howtos=*[[How to adjust a Neural Networks metadata]]
|references=<references>
<ref name="Cheatsheet">Cheatsheet ∙ found at: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks (last visited: 2024-09-21)</ref>
<ref name="ONNX">ONNX ∙ found at: https://onnx.ai/ (last visited: 2024-09-21)</ref>
<ref name="Netron">Netron ∙ found at: https://netron.app/ (last visited: 2024-10-14)</ref>
</references>
}}
