Neural Network (Inference Overlay)
[[File:inference_overlay_neural_network.jpg|thumb|right|Selecting a Neural Network in the [[Inference Overlay]] Wizard]]
A Neural Network in the {{software}} is a pre-trained convolutional neural network<ref name="Cheatsheet"/> that can be used by an [[Inference Overlay|AI Inference Overlay]] to classify or detect patterns and features in one or more input [[Overlay]]s.
Neural Networks are stored in the {{software}} as data [[item]]s with a reference to an [[ONNX]] file (Open Neural Network Exchange format<ref name="ONNX"/>), which can be inspected with Netron<ref name="Netron"/>.
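As an illustration (not part of the {{software}} itself), such an ONNX file can also be loaded and validated with the <code>onnx</code> Python package; the file name <code>network.onnx</code> below is a hypothetical example:
<syntaxhighlight lang="python">
import onnx

# Load a hypothetical exported network file (the name is illustrative).
model = onnx.load("network.onnx")

# Verify that the file is a structurally valid ONNX graph.
onnx.checker.check_model(model)

# Report which ONNX operator set the graph targets.
print("opset:", model.opset_import[0].version)
</syntaxhighlight>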
[[Input tensor (Inference Overlay)|Input]] and [[Output tensor (Inference Overlay)|output]] for neural networks are handled using data tensors: multi-dimensional data arrays. These tensors are identified automatically when a Neural Network is selected or added.
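As a minimal sketch of what that identification involves, the <code>onnxruntime</code> Python package can list each input and output tensor's name, shape, and element type; again, <code>network.onnx</code> is a placeholder:
<syntaxhighlight lang="python">
import onnxruntime as ort

session = ort.InferenceSession("network.onnx")  # placeholder file name

# Every input and output is a tensor with a name, a (possibly symbolic)
# shape, and an element type such as tensor(float).
for tensor in session.get_inputs():
    print("input:", tensor.name, tensor.shape, tensor.type)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape, tensor.type)
</syntaxhighlight>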
Whether a Neural Network classifies or detects objects in a given input depends on its inference model. Such a model is built and trained using AI software, such as [[PyTorch]].
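A minimal sketch of that workflow, assuming PyTorch is used: define (and normally train) a convolutional model, then export it to the ONNX format described above. The model, class name, and file name below are purely illustrative:
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# A tiny convolutional classifier, standing in for a real trained model.
class TinyCNN(nn.Module):
    def __init__(self, classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyCNN().eval()

# Export to ONNX; the dummy input fixes the expected tensor layout (NCHW).
torch.onnx.export(model, torch.rand(1, 3, 224, 224), "network.onnx")
</syntaxhighlight>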
===Supported Convolution Types===
# Image Classification
#* Classifies a picture as a whole
#* Predicts the probability of each object class
# Detection (with masks and bounding boxes)
#* Detects multiple objects in a single picture
#* Predicts the probability of each detected object and where it is located
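The difference between the two types shows up in their output tensors. The following sketch uses pre-trained torchvision models as stand-ins (these are not networks shipped with the {{software}}): a classifier returns one score per class for the whole picture, while a detector returns boxes, labels, scores, and masks per detected object:
<syntaxhighlight lang="python">
import torch
import torchvision

# Image classification: one score per class for the whole picture.
classifier = torchvision.models.resnet18(weights="DEFAULT").eval()
with torch.no_grad():
    logits = classifier(torch.rand(1, 3, 224, 224))  # shape [1, 1000]
probabilities = logits.softmax(dim=1)

# Detection: a dict of boxes, labels, scores, and masks per input image.
detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
with torch.no_grad():
    detections = detector([torch.rand(3, 480, 640)])[0]
print(detections["boxes"].shape, detections["scores"].shape, detections["masks"].shape)
</syntaxhighlight>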
===Parameters===
Neural Networks can also store default parameters for an [[Inference Overlay]], so that the overlay is configured appropriately when the network is assigned to it. The following parameters are used:
* Preferred [[Grid cell size]] (m).
* [[Model attributes (Inference Overlay)|Inference Overlay attributes]].
* The [[Inference Overlay]]'s legend [[Labels result type (Inference Overlay)|labels]].
{{article end
|seealso=
* [[ONNX]]
* [[PyTorch]]
|references=<references>
<ref name="ONNX">ONNX ∙ found at: https://onnx.ai/ (last visited: 2024-09-21)</ref>
<ref name="Cheatsheet">Convolutional Neural Networks cheatsheet ∙ found at: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks (last visited: 2024-09-21)</ref>
<ref name="Netron">Netron ∙ found at: https://netron.app/ (last visited: 2024-10-14)</ref>
</references>
}}
{{InferenceOverlay nav}}