Take Advantage of the “TensorFlow Object Detection API” - Part 1!!!

Sahil Chachere
Jan 11, 2020 · 3 min read

Part 1 - Setting up the system environment.


Note: I have followed these steps on Ubuntu 16.04 and Ubuntu 18.04 systems.

Before proceeding with the actual process, your system must have the following libraries installed:

I. First, check whether the protobuf compiler is already installed on your Ubuntu system by hitting the command protoc --version in a terminal. If you get a result like “libprotoc 3.0.0”, you are ready to jump to the main process, but if the command fails to show a version, install protobuf-compiler by using

$ sudo apt install protobuf-compiler
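
After installing, run the version check again; you should get a version string similar to the one below (the exact number depends on your Ubuntu release):

$ protoc --version
libprotoc 3.0.0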

If you still face any version compatibility issue with protoc, download the latest Python release of protobuf from here and issue the following commands.
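
Before running the build, extract the downloaded archive and change into it; a minimal sketch, assuming the release tarball is named protobuf-python-3.x.x.tar.gz (substitute the actual file name and version you downloaded):

$ tar -xzf protobuf-python-3.x.x.tar.gz
$ cd protobuf-3.x.x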

This process is lengthy, so be patient during execution:

$ ./configure
$ make
$ make check
$ sudo make install
$ sudo ldconfig

Welcome to the world of TensorFlow!!!

Cloning Process

1. The very first step is to clone the ‘models’ repository from this link; by running the git clone command below on your local system, you will get the models directory.

$ git clone https://github.com/tensorflow/models.git

2. Once the models repo is cloned to the system, you can see the models directory. Within models you will find the ‘research’ folder along with other files and directories, but our main goal, at least for this tutorial, is to work with the research directory.
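
To confirm the layout, change into the research directory and list its contents; you should see object_detection and slim among them (the other entries will vary with the version of the repo you cloned):

$ cd models/research
$ ls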

Protoc Process

3. The very next step is to compile the .proto files, which you will find inside path_to_the/models/research/object_detection/protos. Now change your current working directory to path_to_the/models/research and, from within the research directory, hit the command:

$ protoc object_detection/protos/*.proto --python_out=.

If the 3rd step executes successfully, you won’t see any errors in the terminal; but if you do, please install protobuf-compiler properly and repeat the 3rd step. After executing the 3rd step, you will also see generated .py files (one *_pb2.py file per .proto) inside object_detection/protos.
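
You can quickly verify that the generated files are there, for example:

$ ls object_detection/protos/*_pb2.py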

Export Path Process

4. This is a very important step if you don’t want to see errors like “No module named nets” in the future while using the models repo and training your neural networks. To proceed, open the bashrc file in an editor by hitting the command vim ~/.bashrc, and paste this line into it:

export PYTHONPATH=$PYTHONPATH:"path_to_the/models/research/":"path_to_the/models/research/slim"

Once you write and quit the bashrc file, don’t forget to hit the command “source ~/.bashrc” so the change takes effect in your current terminal; any new terminal you open will pick it up automatically, since bashrc is executed every time you open one.
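
As a quick sanity check (just a sketch; depending on your setup you may need python3 instead of python), the following should print the path and import without any “No module named” error:

$ echo $PYTHONPATH
$ python -c "import object_detection, nets"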

Testing Process

5. This is the last, and a very important, step to know whether our hard work really paid off or not. To check it out, run the following Python script from within path_to_the/models/research:

$ python object_detection/builders/model_builder_test.py

If the above step runs successfully, the tests will run and finish without any errors in the terminal.

That’s it; this is all about Part 1 of the series. Here we have created the training and testing environment that is required to train TensorFlow models and to run inference using either pre-trained or custom-trained models.

In Part 2, I will show how we can use TensorFlow’s pre-trained models for inference.

