Output tensor (Inference Overlay)
An Output tensor is a multi-dimensional data array that serves as the output of a Neural Network. Generally, these output tensors are the results of the operations applied by the convolutional neural network to the input tensors. Depending on the type of neural network, multiple types of output tensors can be generated.
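The following is a minimal, generic sketch (not Tygron Platform code) of how applying a convolution operation to an input tensor produces an output tensor. The function and variable names are illustrative only, and NumPy is used purely for demonstration.

<syntaxhighlight lang="python">
# Generic illustration: a single 2D convolution (valid padding, stride 1)
# turning an input tensor into an output tensor. Not Tygron Platform API.
import numpy as np

def convolve2d(input_tensor, kernel):
    """Slide a kernel over the input tensor and return the output tensor."""
    h, w = input_tensor.shape
    kh, kw = kernel.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    output_tensor = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output cell is the weighted sum of one input window.
            output_tensor[i, j] = np.sum(input_tensor[i:i + kh, j:j + kw] * kernel)
    return output_tensor

# Hypothetical input tensor, e.g. a small grid of raster values.
input_tensor = np.arange(25, dtype=float).reshape(5, 5)
# A 3x3 kernel standing in for one learned operation of the network.
kernel = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]], dtype=float)

output_tensor = convolve2d(input_tensor, kernel)
print(output_tensor.shape)  # (3, 3): the resulting output tensor
</syntaxhighlight>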
In the Tygron Platform, the following output tensors are supported:
Notes
- The inference overlay wizard will report an issue when a result type is referenced by a tensor link but has not yet been added to the Inference Overlay.