This Week
Team Goal: Set up development environment
Personal Goal: Get familiar with OpenMV-H7 R1 camera module
My main goal setting out this first week was simply to set up a working development environment for the OpenMV-H7 module.
The camera module features an STM32H743 microcontroller, micro SD card slot, and an OV7725 image sensor. The sensor is capable of taking 640x480 color images at 75 FPS, which should be more than fast enough for the Set Classifier.
Most importantly, the microcontroller runs MicroPython, meaning development can be done at a high level in Python instead of C. The board also supports TensorFlow Lite, so our team will need to figure out how to build our CNN so that it is TensorFlow Lite compatible and can run on the OpenMV board. In any case, I expect this to be much easier than implementing a CNN in C like in the first miniproject.
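As a rough idea of what that conversion step might look like, here is a generic Keras-to-TFLite sketch. This is not our actual training pipeline; the model file names are placeholders, and we still need to work out what architecture and quantization settings the board can handle:

```python
import tensorflow as tf

# Placeholder: assume a small Keras CNN for card classification has already been trained.
model = tf.keras.models.load_model("set_card_cnn.h5")

# Convert to a TensorFlow Lite flatbuffer. Optimization/quantization helps keep the
# model small enough for the board's limited flash and RAM.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting .tflite file could then be copied to the OpenMV's SD card and
# loaded from a MicroPython script on the board.
with open("set_card_cnn.tflite", "wb") as f:
    f.write(tflite_model)
```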
I followed a tutorial from DigiKey to get started with the board. Development is primarily done in the OpenMV IDE. Connecting the board was super simple, and a few clicks later I was running the helloworld_1.py example script that comes with the IDE. It simply streams the video feed from the camera to the computer and displays the FPS.
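For reference, the example script is only a few lines; it looks roughly like the following (reproduced from memory, so the exact contents of helloworld_1.py in your IDE version may differ slightly):

```python
import sensor, image, time

sensor.reset()                      # Reset and initialize the camera sensor
sensor.set_pixformat(sensor.RGB565) # Capture color images
sensor.set_framesize(sensor.QVGA)   # 320x240 resolution
sensor.skip_frames(time=2000)       # Let the sensor settle after changing settings

clock = time.clock()                # Used to measure FPS

while True:
    clock.tick()                    # Start the frame timer
    img = sensor.snapshot()         # Grab a frame (streamed to the IDE automatically)
    print(clock.fps())              # Print frames per second to the serial terminal
```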
Just from moving the camera around by hand, I identified a couple of possible issues:
The camera must be focused by hand by twisting the attached lens. If the camera is moved slightly closer to or further from the focus point, the image gets blurry. While the shape and color could still be discerned, the stripes on cards with striped infills blurred together. Depending on how much trouble this gives our infill classification system, we may need to find a way to mount the camera on a tripod so it can stay in focus.
The cards are somewhat reflective, making the video prone to glare depending on the ambient light and viewing angle. We may need to investigate whether there are ways to dynamically adjust the camera's settings based on ambient light to minimize the effect of glare on classification.
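If we do go down that road, the OpenMV firmware exposes the relevant knobs through the sensor module. A minimal sketch of locking down the automatic settings might look like the following; the specific gain and exposure values here are made-up placeholders that we would have to tune for our lighting:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# Disable the automatic gain/exposure/white-balance loops so lighting changes
# (and glare) don't cause the image to shift under us between frames.
sensor.set_auto_gain(False, gain_db=10)             # placeholder gain value
sensor.set_auto_exposure(False, exposure_us=10000)  # placeholder exposure value
sensor.set_auto_whitebal(False)
sensor.skip_frames(time=500)                        # let the new settings take effect

while True:
    img = sensor.snapshot()
```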
Next Week
Team Goal: Run first iteration of respective code assignments
Personal Goal: Isolate a single card out from the background
Originally, part of our team's week 3 milestone was getting the camera hardware running, since we anticipated needing to do a lot of involved debugging with C in CubeIDE to successfully capture an image. However, upon actually using the OpenMV camera board, I found this to be very easy to accomplish and have already finished the week 3 milestone.
Given this, my partner Bradley and I discussed rebalancing the workload. I will take on the task of isolating the portions of the frame corresponding to each card, which can then be supplied to the CNN Bradley is developing for classification. Going into next week I will do some initial research on localization and attempt to isolate one card from the background.
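As a starting point for that research, I may experiment with the rectangle detection built into the OpenMV image library. A rough sketch of pulling a single card-shaped region out of the frame could look like the code below; the edge threshold is a placeholder that would need tuning, and this assumes the card contrasts reasonably with the background:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()

    # Look for rectangle-shaped outlines (cards against a contrasting background).
    # The threshold controls how strong an edge must be to count; it will need tuning.
    rects = img.find_rects(threshold=10000)
    if rects:
        # Take the largest rectangle found and crop the frame down to it, which is
        # roughly the region that would be handed off to the CNN for classification.
        card = max(rects, key=lambda r: r.w() * r.h())
        img.draw_rectangle(card.rect(), color=(255, 0, 0))
        cropped = img.copy(roi=card.rect())
```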
Great work Tyler! You have made excellent progress :)