Seeed Wio Terminal AI Grove camera

In this tutorial, we will see how to make an image classification based AI with the Seeed Wio Terminal and the Grove AI camera. There are different tutorials on this topic; the main source for this one is located here. The purpose is to give my students a step-by-step approach so they can complete it in the limited time they have.

This is based on the SeeedStudio K1100 development kit.

Get images of what you want to classify

First, we need to get pictures of what we want to classify. Today we are going to classify poker tokens and we want to have 3 classes:

  • Blue token
  • Red token
  • No Token

The background is a white page and the objective is to take different pictures; your cellphone is great for this. The images need to be resized to something like 800×600. Create about 90 pictures across the 3 classes.

The “No Token” class is a class that can contain nothing, but also different objects or poker tokens with other colors.

Example of poker token images: BLUE, RED and EMPTY classes

You can find the entire sample data set I used in the Wio Terminal AI project repository on GitHub.

Create an annotated data set for the YOLOv5 model

For this we are going to use the Roboflow online application. You need to create an account and a project, select Object Detection, upload your images, save & continue, then assign the images to yourself for annotation and start annotating them.

roboflow steps

For the annotation, draw a box around the interesting part of the image and select the right tag, adding new tags as you encounter them. The tags are BLUE, RED and OTHER.

roboflow dataset

Once done, add the images to your dataset; this will create a data set in which some of the images will be used for training and the others for validation and test.

Our data set is really small and many more images would be better, but… we do with what we have.

You can then select “Generate a new version” and apply some modifications to the images, starting with a resize to 192×192 pixels; the best is to select the Fill (with center crop) option.

You can also use Augmentation to generate more images from the data set. Enable, on the bounding boxes: Rotation, Blur and Noise with the default parameters. This will generate 207 images instead of the 87 initial ones.

Now we can export this data set with the following settings:

Export the data set from Roboflow in PyTorch YOLOv5 format

A popup window will be displayed, containing a snippet to access the data set for the training. Do not close this popup.

Train the model on the data set

We are going to do the training on Google Colab with a pre-configured environment.

Train the model on Google Colab

In step 4, you need to copy and paste the content of the snippet from Roboflow to select your data set. If the workspace field is empty you will get an error: you need to set the workspace. It’s the name you gave at registration. You can find it in the URL when you click on “Show Public View”; it appears in the URL path, before the model name and version.

Then you need to execute all the steps of the notebook, one by one, reading the comments to understand what is executed at every step. Step 6 trains the model and takes about 5 minutes.

After this you will generate a UF2 file that can be used on the Grove AI camera. One point to note: if the download fails, you may have to change the location in step 8. I have also added a %pwd command to see where we are.

# Place the model at index 1
!python3 uf2conv.py -f GROVEAI -t 1 -c runs/train/yolov5n6_results/weights/best-int8.tflite -o model-1.uf2
%cp model-1.uf2 ../../
%pwd

Once we have the UF2 file we can continue. The -t 1 above indicates that our model is stored at index 1 of the firmware; this will be used later.

Load the AI firmware into the camera

Once connected over USB, the camera can be switched into bootloader mode by double-clicking the button as indicated in this picture:

Set the Grove AI camera in bootloader mode

First, make sure you have the default firmware (grove_ai_v02-00.uf2) on the camera. Download it and upload it to the camera as indicated above. Then you can upload the UF2 file generated in the previous step.

A drive named GROVEAI will be mounted on your computer. You can drag & drop the previously generated UF2 file onto this drive to flash the camera.

On recent versions of macOS you may get an error message when doing the drag & drop, so you can do a manual copy from a terminal:

$ cp -X model-1.uf2 /Volumes/GROVEAI/
# or, with rsync
$ rsync model-1.uf2 /Volumes/GROVEAI/

The GROVEAI drive will disappear, indicating the firmware has been uploaded.

Make a Wio Terminal program using it

You first need to install the Seeed Arduino GroveAI library: download the code as a ZIP, then add it in Arduino as a library from the ZIP file. Then you can get the source code from the GitHub repository.
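
The sketch in the repository follows the library's object_detection example. Stripped down, its core looks roughly like the sketch below; treat it as a sketch only: the enum values, the 0x11 model index and the object_detection_t field names come from the Seeed examples and should be checked against the library version you installed.

#include "Seeed_Arduino_GroveAI.h"
#include <Wire.h>

GroveAI ai(Wire);

void setup() {
  Wire.begin();
  Serial.begin(115200);
  // Object detection with the model we flashed (0x11 is the value used in the Seeed example for model 1)
  if (!ai.begin(ALGO_OBJECT_DETECTION, (MODEL_INDEX_T)0x11)) {
    Serial.println("failed to find camera");
    while (1) delay(1000);
  }
}

void loop() {
  if (ai.invoke()) {                          // start an inference
    while (ai.state() != CMD_STATE_IDLE) {    // wait for the camera to finish
      delay(20);
    }
    uint8_t len = ai.get_result_len();        // number of objects detected
    for (int i = 0; i < len; i++) {
      object_detection_t data;
      ai.get_result(i, (uint8_t *)&data, sizeof(object_detection_t));
      Serial.print("class: ");
      Serial.print(data.target);              // class index (BLUE / RED / ...)
      Serial.print(" confidence: ");
      Serial.println(data.confidence);
    }
  }
  delay(200);
}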

Compile and push the firmware to the Wio Terminal, then connect the camera to the Grove connector located on the left side of the Wio Terminal. If you want to use a LoRa-E5 module with it, you can connect it on the right side.

Wio Terminal with Grove AI Camera connected

Then you can run the code and open the serial monitor to watch what is happening. If you get “failed to find camera”, it may be due to a problem with the firmware upload into the camera; check this first.

Then you should see the classification in the serial monitor:

Result on the serial monitor of the poker token classification

It’s a bit hard to correctly position the camera when you can’t see anything on a screen, so you can connect the camera to the computer over USB and use this website in a Chrome browser to see how it is positioned. I did not succeed in making it work on Firefox.

Watch the camera analysis and capture in real time in a browser over the USB port

What do students need to do next?

So, this is only the beginning of the practice. Once this is working, you need to make the measurement stable: as you can see, even when nothing is moving the classification changes, and this is particularly true for the blue token (check the blue vs black pictures in the samples to understand why). Basically, we need to detect a stable situation.
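
One possible approach (just a sketch, not the required solution): only accept a class once the camera has returned the same value for several consecutive inferences, for example:

// Accept a class only once the camera has returned the same value
// for STABLE_COUNT consecutive inferences; return -1 while unstable.
#define STABLE_COUNT 5

int stableClass(int reading) {
  static int lastReading = -1;
  static int sameCount = 0;
  if (reading == lastReading) {
    sameCount++;
  } else {
    lastReading = reading;
    sameCount = 1;
  }
  return (sameCount >= STABLE_COUNT) ? reading : -1;
}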

Once we have a stable situation, we can count the tokens passing in front of the camera, increasing a counter every time a token passes. So we may have 3 counters: total tokens, total blue, total red.
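
Counting can then be done on transitions of the stable class: a token is counted when the situation goes from “No Token” to blue or red. A possible sketch, using the value returned by stableClass() above (the class index values are assumptions, use the ones coming from your own data set):

// Hypothetical class indexes, depending on the order of the tags in your data set
#define CLASS_EMPTY 0
#define CLASS_BLUE  1
#define CLASS_RED   2

uint16_t totalTokens = 0, totalBlue = 0, totalRed = 0;
int previousStable = CLASS_EMPTY;

void countToken(int stable) {
  if (stable < 0) return;                            // ignore unstable readings
  if (previousStable == CLASS_EMPTY && stable != CLASS_EMPTY) {
    totalTokens++;                                   // a token just appeared
    if (stable == CLASS_BLUE) totalBlue++;
    if (stable == CLASS_RED)  totalRed++;
  }
  previousStable = stable;
}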

Once we have this, you need to report these 3 counters at once over LoRaWAN on a regular basis, let’s say every 5 minutes.
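
For the reporting part, a compact payload is enough: the 3 counters fit in 6 bytes. The sketch below assumes a Grove LoRa-E5 running its AT firmware, already joined to the network and reachable on a serial port you choose (the wiring and port name are up to you); AT+MSGHEX sends an unconfirmed uplink with a hexadecimal payload.

#define REPORT_PERIOD_MS (5UL * 60UL * 1000UL)        // 5 minutes

// Send the 3 counters defined above (3 x uint16, big endian, 6 bytes) as one uplink.
// 'loraSerial' is whatever serial port the LoRa-E5 AT firmware is wired to.
void reportCounters(Stream &loraSerial) {
  static unsigned long lastReport = 0;
  if (millis() - lastReport < REPORT_PERIOD_MS) return;
  lastReport = millis();

  char cmd[40];
  snprintf(cmd, sizeof(cmd), "AT+MSGHEX=\"%04X%04X%04X\"",
           (unsigned)totalTokens, (unsigned)totalBlue, (unsigned)totalRed);
  loraSerial.println(cmd);                            // unconfirmed LoRaWAN uplink
}

On the network side, the payload decoder simply splits the 6 bytes back into the 3 counters.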
