The Deep Dive

Training AI Models for Tree Detection with Minimal Annotations

Written by Gwihwan Moon | Sep 15, 2024 9:25:11 AM

In this article, we explore how to train an AI model in Deep Block starting from only 100 training masks.

 

Project Preparation

Previously, we demonstrated an AI model trained with just 100 annotations to identify trees in satellite images.

 

You can start this project by visiting the following link: https://app.deepblock.net/deepblock/store/project/plilkeuxglzp9ll13

First, copy the project to your console.

After a short while, the project is successfully copied to the Console page, and you can open it.

Model Training

Upon opening the project, you'll see that 641 masks are marked on the satellite images in PREDICT mode.

This tree segmentation project was trained with 100 tree masks, and the model still misses some trees in the satellite images.

Moving the satellite image to TRAIN mode allows us to repurpose the inference results as training data.

In other words, the model trained with 100 annotations generated 641 masks through inference, and these masks can now be used as additional training data.
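Conceptually, this is a form of pseudo-labeling (self-training): the model's own high-confidence predictions are promoted to training annotations. The sketch below illustrates the idea in plain Python with NumPy; the function name and threshold are illustrative assumptions, not part of Deep Block's interface, which performs this step for you when you move images to TRAIN mode.

```python
import numpy as np

def masks_to_pseudo_labels(predicted_masks, confidences, threshold=0.8):
    """Keep only high-confidence predicted masks as new training annotations.

    predicted_masks: list of boolean NumPy arrays, one per detected tree
    confidences:     per-mask confidence scores reported by the model
    threshold:       illustrative cut-off; tune it for your own data
    """
    return [
        mask
        for mask, score in zip(predicted_masks, confidences)
        if score >= threshold
    ]

# Toy example: three dummy 4x4 masks with different confidence scores
masks = [np.zeros((4, 4), dtype=bool) for _ in range(3)]
scores = [0.95, 0.60, 0.88]
print(len(masks_to_pseudo_labels(masks, scores)))  # 2 masks survive the filter
```

Filtering by confidence matters because low-quality pseudo-labels can reinforce the model's mistakes; reviewing the masks before retraining, as described next, serves the same purpose.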

After moving the file and reviewing the new training data, let's retrain the model on it.

Click the TRAIN button to begin training the model.

After a few minutes, the training process will be completed.

Model Inference

Once training is finished, move the satellite images from TRAIN mode back to PREDICT mode.

Let's see how accurately the model, now trained with 641 masks, can identify trees in the satellite images.

Click the PREDICT button.

After some time, the analysis will be completed and the inference results will be displayed.

Now, the model is able to identify a total of 1,236 tree masks in the satellite images.

As you can see, the model now detects trees that it previously missed.

Even with a limited amount of training data, you can train an AI model and enhance its performance through iterative inference and retraining.
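Put together, the whole workflow is a simple loop: train on what you have, predict on new images, keep the confident masks, and retrain. The sketch below captures that loop with a placeholder model interface; train and predict are assumed method names for whatever segmentation stack you use, not Deep Block's actual API.

```python
def self_training_loop(model, labeled_masks, unlabeled_images,
                       rounds=2, threshold=0.8):
    """Illustrative self-training loop: train, predict, keep confident masks, retrain.

    `model` is assumed to expose train(masks) and predict(image) -> (masks, scores);
    both are placeholders for whatever your segmentation framework provides.
    """
    training_set = list(labeled_masks)  # e.g. the 100 hand-drawn tree masks
    for _ in range(rounds):
        model.train(training_set)       # retrain on everything collected so far
        for image in unlabeled_images:
            masks, scores = model.predict(image)
            # Promote confident predictions to training data for the next round
            training_set += [m for m, s in zip(masks, scores) if s >= threshold]
    return model, training_set
```

Each round typically surfaces additional trees (100, then 641, then 1,236 masks in this project), though the gains taper off once the pseudo-labels stop adding new information.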