How to train your own AI model for an Inference Overlay
To train your own Mask R-CNN AI model for an Inference Overlay in the Tygron Platform, you will need datasets to train on.
To export these datasets from your projects in the Tygron Platform, follow this how-to.
Tygron AI Suite
The Tygron AI Suite is available on GitHub. This repository contains the files needed to configure a Conda environment in which a new Mask R-CNN AI model can be trained.
Anaconda Navigator is used to manage the Conda environment and to run Jupyter Notebooks with Python.
Example notebooks provided in the repository can be used as a basis to train your own model.
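The notebooks run in a Python environment that provides PyTorch and torchvision (an assumption about the environment's contents based on Mask R-CNN training in general, not a literal excerpt from the repository). A minimal sketch of a notebook cell that verifies such a setup could look like:

    # Minimal environment check (sketch); assumes the tygronai environment
    # provides PyTorch and torchvision for Mask R-CNN training.
    import torch
    import torchvision

    print("PyTorch:", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("CUDA available:", torch.cuda.is_available())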
How to train your own AI model for an Inference Overlay:
- Either clone the repository [1], or simply download it as a zip.
- Unzip the download (if applicable) and open the folder containing the local repository.
- Download Anaconda Navigator at [2] and install it.
- Open the Anaconda Navigator
- Select the Environments tab and click the Import button.
- Select the file "tygronai.yml" from the local repository of the tygron-ai-suite. This file automatically sets up the environment needed to open, edit and execute the Jupyter Notebooks. Importing it downloads all necessary dependencies and may take a while.
- Once configured, select the Home tab
- Open either the JupyterLab or Jupyter Notebook application. You might need to install it first by clicking the Install button.
- A browser will open with the selected application.
- Browse to the folder of the tygron-ai-suite repository and select the "example_config.ipynb".
- Press the double-arrow button (Restart the kernel and run all cells) to run the entire Jupyter Notebook. See the images below for what to expect.
- Eventually an ONNX file will be created; a rough sketch of this final export step is shown below this list.
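As a hedged sketch of what the notebook's final step roughly does, the following builds a torchvision Mask R-CNN model and exports it to ONNX. The class count, image size and output filename are placeholders; the actual notebook derives these from the dataset exported from the Tygron Platform and may build and export the model differently.

    # Hedged sketch of building a torchvision Mask R-CNN and exporting it to ONNX.
    # num_classes and the dummy image size are placeholders; the real values come
    # from the dataset exported from the Tygron Platform.
    import torch
    import torchvision

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=3)
    model.eval()

    # torchvision detection models take a list of 3-channel image tensors.
    dummy_input = [torch.rand(3, 512, 512)]

    # Opset 11 or higher is required for the detection operators.
    torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)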
Notes
- Once an ONNX file is created, open the folder containing it and drag the file into the Tygron Client Application to automatically import it with an Inference Overlay.
- In the Configuration step of the example_config notebook, cuda should be printed; if not, cpu will be printed instead. Training a model on your CPU is significantly slower. You might need to open a command line on your computer and run the command nvidia-smi to find out which CUDA version your GPU supports.
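The device check in the Configuration step is typically along these lines (a minimal sketch, not the literal notebook code):

    # Sketch of the device check: prints "cuda" when a supported GPU is available,
    # otherwise falls back to "cpu".
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(device)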