---
title: Blob Detection with OpenMV
difficulty: intermediate
tags: [OpenMV, Blob Detection, Machine Vision]
description: This tutorial will show you how to use the Nicla Vision to detect the presence and the position of objects in a camera image.
author: Sebastian Romero
---

## Overview
In this tutorial you will use the Arduino® Nicla Vision to detect the presence and the position of objects in a camera image. For that you will use a technique referred to as blob detection. For this task you will write a MicroPython script and run it on the Nicla Vision with the help of the OpenMV IDE.

## Goals
- Learn how to use the OpenMV IDE to run MicroPython on Nicla Vision
- Learn how to use the built-in blob detection algorithm of OpenMV
- Learn how to use MicroPython to toggle the built-in LEDs

### Required Hardware and Software
- Nicla Vision board (<https://store.arduino.cc/products/nicla-vision>)
- Micro USB cable (either USB A to Micro USB or USB C to Micro USB)
- OpenMV IDE 2.6.4+

## Nicla Vision and the OpenMV IDE

The OpenMV IDE was built for machine vision applications. It is meant to provide an Arduino-like experience for simple computer vision tasks using a camera sensor. OpenMV comes with its own firmware that is built on MicroPython. Among other hardware, it supports the Nicla Vision board. OpenMV allows you to easily preview the camera stream and visually inspect color ranges to define thresholds for your machine vision scripts. [Here](https://openmv.io/) you can read more about the OpenMV IDE.

## Instructions

### Configuring the Development Environment
Before you can start programming OpenMV scripts for the Nicla Vision you need to download and install the OpenMV IDE. Open the [OpenMV download](https://openmv.io/pages/download) page in your browser and download the version that you need for your operating system. Please follow the instructions of the installer.

### Flashing the OpenMV Firmware

Connect the Nicla Vision to your computer via the USB cable if you haven't done so yet. Put the Nicla Vision in bootloader mode by double-pressing the reset button on the board. The built-in LED will start fading in and out. Now open the OpenMV IDE.

Click on the "connect" symbol at the bottom of the left toolbar.

A pop-up will ask you how you would like to proceed: "DFU bootloader(s) found. What would you like to do?". Select "Reset Firmware to Release Version". This will install the latest OpenMV firmware on the board. If it asks you whether it should erase the internal file system you can click "No".

The board's LED will start flashing while the OpenMV firmware is being uploaded. A pop-up window will open which shows you the upload progress. Wait until the LED stops flashing and fading. You will see a message saying "DFU firmware update complete!" when the process is done.

***Installing the OpenMV firmware will overwrite any existing sketches in the internal Flash. As a result, the board's port won't be exposed in the Arduino IDE anymore. To re-flash an Arduino firmware you need to put the board into bootloader mode. To do so, double-press the reset button on the board. The built-in LED will start fading in and out. In bootloader mode you will see the board's port again in the Arduino IDE.***

The Nicla Vision will start flashing its blue LED when it's ready to be connected. After confirming the completion dialog, the board should already be connected to the OpenMV IDE, otherwise click the "connect" button (plug icon) once again.

## Blob Detection

In this section you will learn how to use the built-in blob detection algorithm to detect the location of objects in an image. The algorithm allows you to detect areas in a digital image that differ in properties, such as brightness or color, compared to surrounding areas. These areas are called blobs. Think of a blob as a lump of similar pixels.

Application Examples:

- Detect specific vehicles passing in front of the camera
- Detect missing pieces in an assembly line
- Detect insect infestation on vegetables

To find blobs you need to feed an image from the camera to the algorithm. It will then analyze it and output the coordinates of the found blobs. You will visualize these coordinates directly on the image and indicate whether a blob was found by using the red and green LEDs.

### 1. Prepare the Script

Create a new script by clicking the "New File" button in the toolbar on the left side. Import the required modules:

```python
import pyb    # Import module for board related functions
import sensor # Import the module for sensor related functions
import image  # Import module containing machine vision algorithms
import time   # Import module for tracking elapsed time
```

A module in Python is a confined bundle of functionality. By importing it into the script, its functionality becomes available to your code.

### 2. Preparing the Sensor

In order to take a snapshot with the camera, it has to be configured in the script.

```python
sensor.reset()                      # Resets the sensor
sensor.set_pixformat(sensor.RGB565) # Sets the sensor to RGB
sensor.set_framesize(sensor.QVGA)   # Sets the resolution to 320x240 px
sensor.set_vflip(True)              # Flips the image vertically
sensor.set_hmirror(True)            # Mirrors the image horizontally
sensor.skip_frames(time = 2000)     # Skip some frames to let the image stabilize
```

The most relevant functions in this snippet are `set_pixformat` and `set_framesize`. The camera that comes with the Nicla Vision supports RGB565 images. Therefore we need to set it via the `sensor.RGB565` parameter.

The resolution of the camera needs to be set to a format supported both by the sensor and the algorithm. `QVGA` is a good trade-off between performance and resolution, so you will use it in this tutorial.

Depending on how you hold the camera you may want to play with the `set_vflip` and `set_hmirror` functions. To hold the board with the USB cable facing down you will need to call `set_vflip(True)`. If you want the image to be displayed the same way as you see it with your eyes, you need to call `sensor.set_hmirror(True)`. Otherwise elements in the image such as text would be mirrored.
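
If you prefer not to hard-code the orientation, you could derive it from a flag that describes how the board is mounted. This is only a small sketch; the `USB_FACING_DOWN` name is made up for this example and is not part of the OpenMV API.

```python
USB_FACING_DOWN = True  # Hypothetical flag: True if the USB cable points down

# Flip vertically when the USB cable faces down and mirror horizontally
# so the preview matches what you see with your own eyes.
sensor.set_vflip(USB_FACING_DOWN)
sensor.set_hmirror(True)
```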

### 3. Defining the Color Thresholds

In order to feed the blob detection algorithm with an image, you have to take a snapshot from the camera or load the image from memory (e.g. SD card or internal Flash). In this case you will take a snapshot using the `snapshot()` function. The resulting image then needs to be fed to the algorithm using the `find_blobs` function. You will notice that a list of tuples gets passed to the algorithm. In this list you can specify the LAB color values that are mostly contained in the object that you would like to track.

If you were, for example, to detect purely red objects on a black background, the resulting range of colors would be very narrow. The corresponding LAB value for pure red is roughly (53, 80, 67). A slightly brighter red could be (55, 73, 50). Therefore the LAB range would be L: 53-55, A: 73-80, B: 50-67.

OpenMV provides a convenient tool to figure out the desired color ranges: the Threshold Editor. You can find it in the OpenMV IDE in the menu under **Tools > Machine Vision > Threshold Editor**. Place the desired object in front of the camera and open the tool. When it asks you about the "Source image location?" select "Frame Buffer". In the window that opens you will see a snapshot from the camera and a few sliders to adjust the LAB color ranges. As you move the sliders you will see in the black and white image on the right hand side which pixels would match the set color range. White pixels denote the matching pixels.

As you can see in the following example, the pixels of a nice red apple on a brown background are very nicely clustered. It results in mostly one big blob.
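
Before looking at a trickier example, it helps to see how such a range translates into the tuple you will pass to `find_blobs` later on. The values below simply reuse the pure red example from above rather than numbers measured with the Threshold Editor, so treat them as an illustration only; the order (L min, L max, A min, A max, B min, B max) is the one documented for OpenMV's LAB thresholds.

```python
# LAB threshold tuple in the order (L min, L max, A min, A max, B min, B max).
# Values reuse the pure red example above; measure your own object with the
# Threshold Editor for a real application.
thresholdsRed = (53, 55, 73, 80, 50, 67)
```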

As opposed to the example above with the apple, the clustering of the banana's pixels is slightly less coherent. This is because the banana lies on a background with a similar color, which means that the algorithm is more sensitive to the background pixels. In order to exclude blobs that don't belong to the target object, additional filtering is necessary. You can, for example, set a minimum bounding box size or a minimum blob pixel density, constrain the elongation or roundness of the object, or just look for objects in a specific part of the image, as shown in the sketch below.
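
The snippet below is a minimal sketch of such filtering, not part of the final tutorial script. It reuses the `thresholdsBanana` tuple introduced in the next step; the region of interest and the numeric limits are made-up values that you would tune for your own scene.

```python
import sensor

# Example banana threshold (same values used later in this tutorial)
thresholdsBanana = (45, 75, 5, -10, 40, 12)

img = sensor.snapshot()

# Only search the center of the 320x240 image; (x, y, w, h) are made-up values
roi = (80, 60, 160, 120)

# pixels_threshold and area_threshold discard small blobs before they are returned
blobs = img.find_blobs([thresholdsBanana],
                       roi=roi,
                       pixels_threshold=400,  # At least 400 matching pixels
                       area_threshold=2500,   # Bounding box of at least 50x50 px
                       merge=True)

for blob in blobs:
    # Keep only reasonably dense, elongated blobs (limits are made up, tune them)
    if blob.density() > 0.4 and blob.elongation() > 0.5:
        img.draw_rectangle(blob.rect(), color=(0, 255, 0))
```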

### 4. Detecting Blobs

Now that you know the range of color values to use to find the blobs, you can pass these threshold tuples (one six-value tuple per target color) as a list to the `find_blobs` function:

```python
# Define the min/max LAB values we're looking for
thresholdsApple = (24, 60, 32, 54, 0, 42)
thresholdsBanana = (45, 75, 5, -10, 40, 12)
img = sensor.snapshot() # Takes a snapshot and saves it in memory

# Find blobs with a minimal area of 50x50 = 2500 px
# Overlapping blobs will be merged
blobs = img.find_blobs([thresholdsApple, thresholdsBanana], area_threshold=2500, merge=True)
```

Once the blobs are detected you may be interested in seeing where in the image they were found. This can be done by drawing directly on the camera image.

```python
# Draw blobs
for blob in blobs:
    # Draw a rectangle where the blob was found
    img.draw_rectangle(blob.rect(), color=(0,255,0))
    # Draw a cross in the middle of the blob
    img.draw_cross(blob.cx(), blob.cy(), color=(0,255,0))
```

If you need to know which blob matched which color threshold you can use the `blob.code()` function (see [here](https://docs.openmv.io/library/omv.image.html#image.image.blob.blob.code) for more information).
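
As a rough sketch of how `blob.code()` could be used with the two thresholds from above: `code()` returns a bit mask with one bit per threshold passed to `find_blobs` (bit 0 for the first entry, bit 1 for the second), and merged blobs can have several bits set.

```python
for blob in blobs:
    if blob.code() & 1:  # Bit 0: matched thresholdsApple (first entry in the list)
        print("Apple-colored blob at", blob.cx(), blob.cy())
    if blob.code() & 2:  # Bit 1: matched thresholdsBanana (second entry in the list)
        print("Banana-colored blob at", blob.cx(), blob.cy())
```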

The drawn rectangles and crosses will be visible in the Frame Buffer preview panel on the right side of the OpenMV IDE.

### 5. Toggling LEDs

What if you want some visual feedback from the blob detection without a computer connected to your board? You could, for example, use the built-in LEDs to indicate whether or not a blob was found in the camera image. Let's initialize the red and the green LEDs with the following code:

```python
ledRed = pyb.LED(1)   # Initializes the red LED
ledGreen = pyb.LED(2) # Initializes the green LED
```

And then add the logic that will turn on the appropriate LED if a blob is present. This part of the code will be added after the "Draw Blobs" logic.

```python
# Turn on green LED if a blob was found
if len(blobs) > 0:
    ledGreen.on()
    ledRed.off()
else:
    # Turn the red LED on if no blob was found
    ledGreen.off()
    ledRed.on()
```

In this example the green LED will light up when there is at least one blob found in the image. The red LED will light up if no blob could be found.
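
If you want to take this one step further, you could combine the LED feedback with `blob.code()` to indicate which fruit was detected. The sketch below assumes that `pyb.LED(3)` maps to the blue channel of the built-in RGB LED, an assumption you should verify on your board; the rest reuses the variables defined above.

```python
ledBlue = pyb.LED(3)  # Assumed to be the blue channel of the built-in RGB LED

appleSeen = any(blob.code() & 1 for blob in blobs)   # First threshold matched
bananaSeen = any(blob.code() & 2 for blob in blobs)  # Second threshold matched

if appleSeen:
    ledGreen.on()   # Green indicates an apple-colored blob
else:
    ledGreen.off()

if bananaSeen:
    ledBlue.on()    # Blue indicates a banana-colored blob
else:
    ledBlue.off()

if not blobs:
    ledRed.on()     # Red indicates that nothing was found
else:
    ledRed.off()
```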

### 6. Uploading the Script
Let's program the board with the complete script and test if the algorithm works. Copy the following script and paste it into the new script file that you created.

```python
import pyb    # Import module for board related functions
import sensor # Import the module for sensor related functions
import image  # Import module containing machine vision algorithms
import time   # Import module for tracking elapsed time

sensor.reset()                      # Resets the sensor
sensor.set_pixformat(sensor.RGB565) # Sets the sensor to RGB
sensor.set_framesize(sensor.QVGA)   # Sets the resolution to 320x240 px
sensor.set_vflip(True)              # Flips the image vertically
sensor.set_hmirror(True)            # Mirrors the image horizontally
sensor.skip_frames(time = 2000)     # Skip some frames to let the image stabilize

# Define the min/max LAB values we're looking for
thresholdsApple = (24, 60, 32, 54, 0, 42)
thresholdsBanana = (45, 75, 5, -10, 40, 12)

ledRed = pyb.LED(1)   # Initializes the red LED
ledGreen = pyb.LED(2) # Initializes the green LED

clock = time.clock()  # Instantiates a clock object

while(True):
    clock.tick() # Advances the clock
    img = sensor.snapshot() # Takes a snapshot and saves it in memory

    # Find blobs with a minimal area of 50x50 = 2500 px
    # Overlapping blobs will be merged
    blobs = img.find_blobs([thresholdsApple, thresholdsBanana], area_threshold=2500, merge=True)

    # Draw blobs
    for blob in blobs:
        # Draw a rectangle where the blob was found
        img.draw_rectangle(blob.rect(), color=(0,255,0))
        # Draw a cross in the middle of the blob
        img.draw_cross(blob.cx(), blob.cy(), color=(0,255,0))

    # Turn on green LED if a blob was found
    if len(blobs) > 0:
        ledGreen.on()
        ledRed.off()
    else:
        # Turn the red LED on if no blob was found
        ledGreen.off()
        ledRed.on()

    pyb.delay(50) # Pauses the execution for 50ms
    print(clock.fps()) # Prints the framerate to the serial console
```

Click on the "Play" button at the bottom of the left toolbar. Place some objects on your desk and check if the Nicla Vision can detect them.

***The MicroPython script doesn't get compiled and linked into an actual firmware. Instead it gets copied to the internal Flash of the board where it gets interpreted and executed on the fly.***

## Conclusion

In this tutorial you learned how to use the OpenMV IDE to develop MicroPython scripts that then run on the Nicla Vision. You also learned how to configure the camera of the Nicla Vision to be used for machine vision applications in OpenMV. Last but not least you learned how to interact with the built-in LEDs in MicroPython on the OpenMV firmware.

### Next Steps
- Familiarize yourself with the OpenMV IDE. There are many other features that didn't get mentioned in this tutorial (e.g. the Serial Terminal).
- Try out other machine vision examples that come with the OpenMV IDE. You can find them in the "Examples" menu.

## Troubleshooting
### OpenMV Firmware Flashing Issues
- If the upload of the OpenMV firmware fails during the download, put the board back in bootloader mode and try again. Give it a few tries until the firmware gets successfully uploaded.
- If the upload of the OpenMV firmware fails without even starting, try uploading the latest firmware using the "Load Specific Firmware File" option. You can find the latest firmware on the [OpenMV Github repository](https://github.com/openmv/openmv/releases). Look for a file called **firmware.bin** in the NVISION folder.
- If the camera is not recognized by the OpenMV IDE or if you see a "No OpenMV Cams found!" message, press the reset button of the board once and wait until you see the blue LED flashing. Then try connecting to the board again.
- If you see an "OSError: Reset Failed" message, reset the board by pressing the reset button. Wait until you see the blue LED flashing, connect the board to the OpenMV IDE and try running the script again.