LPRNet is an end-to-end method for Automatic License Plate Recognition without preliminary character segmentation.
Prerequisites:

- Ubuntu* 16.04
- Python* 3.6
- TensorFlow* 1.13.1
- OpenVINO™ 2019 R1 with Python API
- Create and activate a virtual environment:

  ```bash
  cd $(git rev-parse --show-toplevel)/tensorflow_toolkit/lpr
  virtualenv venv -p python3 --prompt="(lpr)"
  echo ". /opt/intel/openvino/bin/setupvars.sh" >> venv/bin/activate
  . venv/bin/activate
  ```
- Install the modules:

  ```bash
  pip3 install -e .
  pip3 install -e ../utils
  ```

  If you do not have a GPU, set the `CPU_ONLY=true` environment variable:

  ```bash
  CPU_ONLY=true pip3 install -e .
  pip3 install -e ../utils
  ```
- Download and prepare the required submodules:

  ```bash
  bash ../prepare_modules.sh
  ```
Predefined configuration for Chinese license plate recognition:

- Configuration file: `tensorflow_toolkit/lpr/chinese_lp/config.py`
- Trained model: LPRNet 94x24
- Training dataset: Synthetic Chinese License Plates

To train a model, go through the steps below.
- Download the training data and extract it into the `data/synthetic_chinese_license_plates` folder. The dataset must consist of a folder with training images named `crops` and a text file with annotations named `annotation`. Use the commands below:

  ```bash
  cd $(git rev-parse --show-toplevel)/data/synthetic_chinese_license_plates
  wget https://download.01.org/opencv/openvino_training_extensions/datasets/license_plate_recognition/Synthetic_Chinese_License_Plates.tar.gz
  tar xf Synthetic_Chinese_License_Plates.tar.gz
  ```
- After extracting the training data archive, split the annotations into `train` and `val` by running the `data/synthetic_chinese_license_plates/make_train_val_split.py` script and passing it the path to the `data/synthetic_chinese_license_plates/annotation` file from the archive. The script outputs the `data/synthetic_chinese_license_plates/train` and `data/synthetic_chinese_license_plates/val` annotation files with full paths to images and labels in the folder with the extracted data. Use the command below:

  ```bash
  python3 make_train_val_split.py Synthetic_Chinese_License_Plates/annotation
  ```

  The resulting folder structure:

  ```
  ./data/synthetic_chinese_license_plates/
  ├── make_train_val_split.py
  └── Synthetic_Chinese_License_Plates/
      ├── annotation
      ├── crops/
      │   ├── 000000.png
      │   ...
      ├── LICENSE
      ├── README
      ├── train
      └── val
  ```
- To start the training process, use the command below:

  ```bash
  cd $(git rev-parse --show-toplevel)/tensorflow_toolkit/lpr
  python3 tools/train.py chinese_lp/config.py
  ```
- To start from a pretrained checkpoint, use the command below:

  ```bash
  wget https://download.01.org/opencv/openvino_training_extensions/models/license_plate_recognition/license-plate-recognition-barrier-0007.tar.gz
  tar xf license-plate-recognition-barrier-0007.tar.gz
  python3 tools/train.py chinese_lp/config.py \
    --init_checkpoint license-plate-recognition-barrier-0007/model.ckpt
  ```
- To start the evaluation process, use the command below:

  ```bash
  python3 tools/eval.py chinese_lp/config.py
  ```
  NOTE: Before starting evaluation, make sure that the `eval.file_list_path` parameter in `lpr/chinese_lp/config.py` points to the file with annotations to test on. Run evaluation in another terminal, so that training and evaluation are performed simultaneously. A small sanity-check sketch for this file list is given right after this list.
- Training and evaluation artifacts are stored by default in `lpr/chinese_lp/model`. To visualize training and evaluation, run `tensorboard` with the command below:

  ```bash
  tensorboard --logdir=./model
  ```

  Then view the results in a browser at http://localhost:6006.
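Before starting evaluation in the second terminal, it can help to check that the file referenced by `eval.file_list_path` exists and that the image paths inside it resolve. The sketch below is not part of the toolkit: the file location and the assumption that each line starts with an image path are illustrative; adjust both to match your `config.py` and data layout.

```python
# Standalone sanity check (not part of the toolkit) for the annotation list that
# eval.file_list_path points to. Assumptions: the path below is where the split
# script placed the `val` file, and each line begins with an image path.
import os

file_list_path = 'data/synthetic_chinese_license_plates/Synthetic_Chinese_License_Plates/val'

with open(file_list_path) as f:
    lines = [line.strip() for line in f if line.strip()]

missing = [line.split()[0] for line in lines if not os.path.exists(line.split()[0])]
print('%d entries, %d missing images' % (len(lines), len(missing)))
```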
To run the model via OpenVINO™, freeze the TensorFlow graph and then convert it to the OpenVINO™ Intermediate Representation (IR) using the Model Optimizer:
```bash
python3 tools/export.py --data_type FP32 --output_dir model/export chinese_lp/config.py
```
Default export paths:

- `lpr/model/export_<step>/frozen_graph` - path to the frozen graph
- `lpr/model/export_<step>/IR/<data_type>` - path to the model converted to the IR format
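To double-check what was exported, the frozen graph can be loaded with plain TensorFlow 1.x. This is only an optional inspection sketch, not a toolkit tool; the graph path matches the export command above and may differ depending on `--output_dir` and the training step.

```python
# Optional inspection sketch: load the exported frozen graph with TensorFlow 1.x
# and print a few node names. Adjust the path to your actual export directory.
import tensorflow as tf

graph_path = 'model/export/frozen_graph/graph.pb.frozen'

graph_def = tf.GraphDef()
with tf.gfile.GFile(graph_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

print([op.name for op in graph.get_operations()][:10])  # first few node names
```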
NOTE: Input data for inference should be set via the `infer.file_list_path` parameter in `tensorflow_toolkit/lpr/chinese_lp/config.py` and must be a text file with a list of paths to license plate images in the following format:

```
path_to_lp_image1
path_to_lp_image2
...
```
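One way to produce such a file is to collect the image paths with a small script. The snippet below is only an illustration: the `crops` folder and the `infer_list.txt` output name are examples, not names required by the toolkit.

```python
# Illustrative helper (not part of the toolkit): write a file list in the
# one-path-per-line format expected by infer.file_list_path.
import glob
import os

image_paths = sorted(glob.glob('Synthetic_Chinese_License_Plates/crops/*.png'))

with open('infer_list.txt', 'w') as f:
    f.write('\n'.join(os.path.abspath(p) for p in image_paths) + '\n')
```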
When the training is complete, the model from the checkpoint can be run on the input data using `tools/infer_checkpoint.py`:

```bash
python3 tools/infer_checkpoint.py chinese_lp/config.py
```
To run inference with the frozen graph:

```bash
python3 tools/infer.py --model model/export/frozen_graph/graph.pb.frozen \
    --config chinese_lp/config.py \
    <image_path>
```
To run inference with the IR model via the OpenVINO™ Inference Engine:

```bash
python3 tools/infer_ie.py --model model/export/IR/FP32/lpr.xml \
    --device=CPU \
    --cpu_extension="${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so" \
    --config chinese_lp/config.py \
    <image_path>
```
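If you prefer to call the Inference Engine directly instead of going through `tools/infer_ie.py`, the sketch below shows the general pattern with the OpenVINO™ 2019 R1 Python API. It is a minimal illustration, not the toolkit's demo: it assumes the exported IR has a single NCHW image input and leaves out decoding of the raw output into a plate string.

```python
# Minimal sketch of running the exported IR with the OpenVINO 2019 R1 Python API.
# Assumptions: a single NCHW image input; output decoding is model-specific and
# intentionally omitted. Check net.inputs/net.outputs for your export.
import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model='model/export/IR/FP32/lpr.xml',
                weights='model/export/IR/FP32/lpr.bin')
plugin = IEPlugin(device='CPU')
# plugin.add_cpu_extension('<path to libcpu_extension>.so')  # if required for your layers
exec_net = plugin.load(network=net)

input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape

image = cv2.imread('<image_path>')  # cropped license plate image
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis].astype(np.float32)

result = exec_net.infer({input_blob: blob})
print({name: out.shape for name, out in result.items()})  # raw network outputs
```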
If you find LPRNet useful in your research, please consider citing the following paper:

```
@article{icv2018lprnet,
  title={LPRNet: License Plate Recognition via Deep Neural Networks},
  author={Sergey Zherzdev and Alexey Gruzdev},
  journal={arXiv:1806.10447},
  year={2018}
}
```