Object detection using YoloV5 (Part 2)

Introduction to YOLOv5

YOLOv5 is the first of the YOLO models written in the PyTorch framework, and it is simple and easy to use. YOLOv5 is an object detection model that comes pre-trained on the large COCO dataset with 80 object classes, and it is used for detecting objects in images as well as videos. We can use the YOLOv5 model in two different ways:

  1. We can use the pre-trained YOLOv5 model directly for object detection in images as well as videos. 
  2. We can train the YOLOv5 model on a custom data set of our own choice. For example, if we want to detect only one object like a car, we can train the YOLOv5 model on a car-annotated data set. 


Note: I explained data set preparation and annotation for object detection in part 1. If you haven't read that post yet, please read it before continuing with this one.


How to annotate and prepare the data set: Click here to read

Let's start :-

First of all, download the data set from the link provided at the end of the blog. After downloading, keep the data set in the same location as your code file.

Then clone the YOLOv5 object detection repository from GitHub. It is purpose-built for object detection and ships with weights pre-trained on the COCO dataset. Here we are going to implement object detection on a custom data set of two-wheeler and four-wheeler vehicles: the two-wheelers are bikes and scooters, and the four-wheelers are cars only.

After cloning the YOLOv5 repository, a folder is created in your Google Colab working directory; go into the YOLOv5 folder using the cd command.

Then install all the requirements listed in the requirements.txt file.

Import all the required libraries, such as torch.



Requirements code
 
Mount your Google Drive in the Colab notebook to access the data set, because my data set is stored in Google Drive. You can save the data set directly in Colab instead, but it is only kept temporarily, for as long as your Colab runtime is running. 
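As a sketch, the Drive mount is a single Python call; it only works inside a Colab runtime, where it prompts you to authorize access:

```shell
# Mount Google Drive at /content/drive (Colab-only; asks for authorization)
python -c "from google.colab import drive; drive.mount('/content/drive')"
```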

Now the most important step is to create a file which stores the number of classes, the class names, and the data set location. 

Firstly, go to the YoloV5 directory -> go to the data directory -> double-click the coco128.yaml file -> the file opens -> clear all its contents (Ctrl+A, then Backspace) -> copy the code below and paste it into the file, changing only the data set paths (train and val) -> save and close the file.

    train: /content/drive/MyDrive/object data/images/train
    val: /content/drive/MyDrive/object data/images/val

    # number of classes
    nc: 2

    # class names
    names: [ 'Four Wheeler', 'Two Wheeler' ]

Now it’s time to train the YOLOv5 object detection model on the custom data set. Just use the code below to train the model; it will be trained for 100 epochs.



Train code
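The training step can be sketched with YOLOv5's standard train.py command, run from inside the yolov5 folder; the image size (640) and batch size (16) below are typical values I have assumed, not ones fixed by this post:

```shell
# Fine-tune the small pretrained checkpoint (yolov5s.pt) for 100 epochs;
# --data points at the coco128.yaml file edited in the previous step
python train.py --img 640 --batch 16 --epochs 100 --data coco128.yaml --weights yolov5s.pt
```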

After training the model for 100 epochs, its weights are saved at a location shown at the end of the training output. The saved model path will look like ( runs/train/exp/weights/last.pt ); the best-performing checkpoint is also saved alongside it as best.pt.

Now compare the actual image with the predicted image to see how the bounding boxes have been drawn. See the images below.



Actual Image

Predicted Image

Now it’s time to test the YOLOv5 object detection model. First, test the model on a single image: use the code below, give it the path of the image, and execute it. After the code runs, the predicted image is saved at a location shown in the output; the path will look like (runs/detect/exp). You can see the predicted image below.

Test model on image
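A sketch of the single-image test using the repository's detect.py; the source path below is a placeholder for your own image:

```shell
# Run inference with the freshly trained weights on one image;
# the annotated result is written under runs/detect/exp
python detect.py --weights runs/train/exp/weights/last.pt --source /content/drive/MyDrive/test.jpg
```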

Now it’s time to test the custom YOLOv5 object detection model on a custom video. Just use the code below to see the result on a video, changing only the path of the video. During execution, the video is split into frames and each frame is processed. Once the code finishes, the predicted video is saved at a location shown in the output; the location will look like ( runs/detect/exp ). You can see the result in the video below.

Test model on video
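The video test is the same detect.py call pointed at a video file (the path below is a placeholder); YOLOv5 reads the video frame by frame and writes an annotated copy:

```shell
# Run inference on a video; the annotated output video lands under runs/detect/exp
python detect.py --weights runs/train/exp/weights/last.pt --source /content/drive/MyDrive/test_video.mp4
```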

Source code and how to use :-

  1. Go to GitHub and download or fork the repository: YoloV5 GitHub
  2. Download the data set: Download dataset
  3. Open Google Colab: Google Colab 
  4. Now open the .ipynb file in Google Colab and keep the data set in the same location

Video Tutorial of YoloV5


Thank you! 





If you have any doubts, please let me know.
