doc: update stdc step-by-step

EricChunYi 2022-04-14 18:19:02 +08:00
parent ec2289d5c4
commit b6108b81cb


**Note:** You need to run `pip uninstall mmcv` first if you have `mmcv` installed.
If mmcv and mmcv-full are both installed, there will be a `ModuleNotFoundError`.
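Because both packages install the same top-level `mmcv` module, an accidental double install is easy to miss. The helper below is our own sketch (not part of mmcv) that inspects the installed distributions to spot a conflict before it bites:

```python
# Sketch of a pre-install check (not part of mmcv): both packages install the
# same top-level `mmcv` module, so inspect installed distributions instead.
from importlib.metadata import distributions

def mmcv_status() -> str:
    """Return 'none', 'mmcv', 'mmcv-full', or 'conflict'."""
    names = {(d.metadata["Name"] or "").lower() for d in distributions()}
    has_lite, has_full = "mmcv" in names, "mmcv-full" in names
    if has_lite and has_full:
        return "conflict"  # uninstall one of them first
    return "mmcv-full" if has_full else ("mmcv" if has_lite else "none")

print(mmcv_status())
```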
## Step 1-2: Install kneron-mmsegmentation
### Step 1-2-1: Install PyTorch
If you see error messages while installing mmcv-full, please check that your installation command matches your installed versions of PyTorch and CUDA, and see the [MMCV pip Installation Instructions](https://github.com/open-mmlab/mmcv#install-with-pip) for the MMCV builds compatible with different PyTorch and CUDA versions.
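Those find-links URLs follow a mechanical `cu<version>/torch<version>` naming pattern. The helper below is our own illustration of that pattern (not an OpenMMLab API), so you can double-check the URL against your own versions:

```python
# Illustrative only: compose the mmcv-full find-links URL from your CUDA and
# PyTorch versions, following the cu<ver>/torch<ver> naming convention.
def mmcv_index_url(cuda: str, torch: str) -> str:
    cu = "cpu" if cuda == "cpu" else "cu" + cuda.replace(".", "")
    return f"https://download.openmmlab.com/mmcv/dist/{cu}/torch{torch}/index.html"

print(mmcv_index_url("11.1", "1.9.0"))
```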
### Step 1-2-3: Clone kneron-mmsegmentation Repository
```shell
git clone https://github.com/kneron/kneron-mmsegmentation.git
cd kneron-mmsegmentation
```
### Step 1-2-4: Install Required Python Packages for Building and Installing kneron-mmsegmentation
```shell
pip install -r requirements_kneron.txt
pip install -v -e .  # or "python setup.py develop"
```
# Step 2: Training Models on Standard Datasets
kneron-mmsegmentation provides many existing semantic segmentation models in [Model Zoo](https://mmsegmentation.readthedocs.io/en/latest/model_zoo.html), and supports several standard datasets like CityScapes, Pascal Context, Coco Stuff, ADE20K, etc. Here we demonstrate how to train *STDC-Seg*, a semantic segmentation algorithm, on *CityScapes*, a well-known semantic segmentation dataset.
## Step 2-1: Download CityScapes Dataset
## Step 2-2: Dataset Preparation
We suggest that you extract the zipped files to somewhere outside the project directory and symlink (`ln`) the dataset root to `kneron-mmsegmentation/data` so you can use the dataset outside this project, as shown below:
```shell
# Replace all "path/to/your" below with where you want to put the dataset!
mkdir -p path/to/your/cityscapes
unzip leftImg8bit_trainvaltest.zip -d path/to/your/cityscapes
unzip gtFine_trainvaltest.zip -d path/to/your/cityscapes
# symlink dataset to kneron-mmsegmentation/data, where "kneron-mmsegmentation" is the repository you cloned in Step 1-2-3
mkdir -p kneron-mmsegmentation/data
ln -s $(realpath path/to/your/cityscapes) kneron-mmsegmentation/data
# Replace all "path/to/your" above with where you want to put the dataset!
```
Then, we need *cityscapesScripts* to preprocess the CityScapes dataset. If you followed our [Step 1-2-4](#step-1-2-4-install-required-python-packages-for-building-and-installing-kneron-mmsegmentation) completely, you should already have the Python package *cityscapesScripts* installed (if not, run `pip install cityscapesScripts`).
```shell
# Replace "path/to/your" with where you want to put the dataset!
```
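Conceptually, that preprocessing converts each `*_labelIds.png` annotation (34 raw label ids) into a `*_labelTrainIds.png` with the 19 training classes, mapping everything else to the ignore index 255. A miniature sketch of the idea, showing only a few entries of the full mapping table:

```python
# Toy subset of the Cityscapes labelId -> trainId mapping (the real table in
# cityscapesScripts covers all 34 labels); unmapped ids become 255 (ignored).
LABEL_TO_TRAIN_ID = {7: 0, 8: 1, 11: 2, 26: 13}  # road, sidewalk, building, car

def to_train_id(label_id: int) -> int:
    return LABEL_TO_TRAIN_ID.get(label_id, 255)

print([to_train_id(i) for i in (7, 26, 4)])  # [0, 13, 255]
```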
The files inside the dataset folder should be something like:
```plain
kneron-mmsegmentation/data/cityscapes
├── gtFine
│   ├── test
│   │   ├── ...
```

Now the dataset should be ready for training.
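Before kicking off training, you may want to double-check that layout programmatically. The snippet below is our own sanity check, not part of the repository:

```python
# Check that the dataset root has the split folders training expects.
import os

EXPECTED = [os.path.join(a, s)
            for a in ("gtFine", "leftImg8bit")
            for s in ("train", "val", "test")]

def missing_dirs(root: str) -> list:
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]

# e.g. missing_dirs("kneron-mmsegmentation/data/cityscapes") should return []
```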
Short-Term Dense Concatenate Network (STDC network) is a lightweight network structure for convolutional neural networks. When this network structure is applied to the semantic segmentation task, it is called STDC-Seg. It was first introduced in [Rethinking BiSeNet For Real-time Semantic Segmentation](https://arxiv.org/abs/2104.13188). Please check the paper if you want to know the algorithm details.
We only need a configuration file to train a deep learning model in either the original MMSegmentation or kneron-mmsegmentation. STDC-Seg is provided in the original MMSegmentation repository, but the original configuration file needs some modification due to our hardware limitations so that we can apply the trained model to our Kneron dongle.

To make a configuration file compatible with our device, we have to:
To achieve this, you can modify `img_scale` in `test_pipeline` and `img_norm_cfg` in the configuration file `configs/_base_/datasets/cityscapes.py`.
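For reference, those two fields look roughly like the fragment below in MMSegmentation's config style. The values shown follow the upstream Cityscapes defaults; the exact Kneron-adjusted numbers live in the provided config, so treat this as a sketch:

```python
# Sketch of the relevant fields in configs/_base_/datasets/cityscapes.py.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),  # shrink this to a size your device supports
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ]),
]
```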
Luckily, here in kneron-mmsegmentation, we provide a modified STDC-Seg configuration file (`configs/stdc/kn_stdc1_in1k-pre_512x1024_80k_cityscapes.py`), so we can easily apply the trained model to our device.

To train an STDC-Seg model compatible with our device, just execute:
```shell
cd kneron-mmsegmentation
python tools/train.py configs/stdc/kn_stdc1_in1k-pre_512x1024_80k_cityscapes.py
```
kneron-mmsegmentation will generate the `work_dirs/kn_stdc1_in1k-pre_512x1024_80k_cityscapes` folder and save the configuration file and all checkpoints there.
# Step 3: Test Trained Model
`tools/test.py` is a script that generates inference results from the test set with our PyTorch model and, if the `--eval` argument is given, evaluates the results to see whether our PyTorch model is well trained. Note that it is always good to evaluate our PyTorch model before deploying it.
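The headline metric reported for semantic segmentation is mIoU: the mean, over classes, of each class's intersection-over-union between predicted and ground-truth pixels. A teaching sketch of that computation (not the repository's implementation):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes, ignore_index=255):
    """IoU per class over flattened label maps."""
    pred, gt = np.asarray(pred).ravel(), np.asarray(gt).ravel()
    keep = gt != ignore_index          # drop pixels marked as ignored
    pred, gt = pred[keep], gt[keep]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union else float("nan"))
    return ious

print(per_class_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2))
```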
## Step 4-1: Export ONNX
`tools/pytorch2onnx_kneron.py` is a script provided by kneron-mmsegmentation to help users convert our trained PyTorch model to ONNX:
```shell
python tools/pytorch2onnx_kneron.py \
work_dirs/kn_stdc1_in1k-pre_512x1024_80k_cityscapes/kn_stdc1_in1k-pre_512x1024_80k_cityscapes.py \
```
## Step 4-2: Verify ONNX
`tools/deploy_test_kneron.py` is a script provided by kneron-mmsegmentation to help users verify that our exported ONNX model generates outputs similar to those of our PyTorch model:
```shell
python tools/deploy_test_kneron.py \
work_dirs/kn_stdc1_in1k-pre_512x1024_80k_cityscapes/kn_stdc1_in1k-pre_512x1024_80k_cityscapes.py \
```
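"Similar outputs" ultimately comes down to a numeric closeness check between the two models' output tensors. The sketch below shows the kind of comparison involved; the tolerances are our assumption, not the script's actual thresholds:

```python
import numpy as np

def outputs_match(onnx_out, torch_out, rtol=1e-3, atol=1e-4) -> bool:
    """Elementwise closeness check between two output tensors."""
    return bool(np.allclose(onnx_out, torch_out, rtol=rtol, atol=atol))

print(outputs_match([0.100001, 0.9], [0.1, 0.9]))  # True
```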
### Step 5-6: Quantize the ONNX model
We [sampled 3 images from the CityScapes dataset](https://www.kneron.com/tw/support/education-center/?folder=OpenMMLab%20Kneron%20Edition/misc/&download=41) as quantization data. To test our quantized model:
1. Download the zip file
2. Extract the zip file as a folder named `cityscapes_minitest`
3. Put `cityscapes_minitest` into the Docker-mounted folder (the path in the Docker container should be `/data1/cityscapes_minitest`)
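At a high level, post-training quantization uses those calibration images to pick a scale per tensor and then rounds values to 8 bits. A toy illustration of symmetric quantization follows; it has nothing to do with the ktc toolchain's actual internals:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric per-tensor quantization to signed num_bits integers."""
    x = np.asarray(x, dtype=np.float32)
    qmax = 2 ** (num_bits - 1) - 1           # 127 for 8 bits
    scale = float(np.abs(x).max()) / qmax or 1.0  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([-1.0, 0.25, 0.5, 1.0], dtype=np.float32)
q, s = quantize(x)
print(np.abs(dequantize(q, s) - x).max())  # small quantization error
```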
You can find the NEF file at `/data1/batch_compile/models_720.nef`. `models_720.nef` is the final compiled model.
# Step 6: Run [NEF](http://doc.kneron.com/docs/#toolchain/manual/#5-nef-workflow) model on [KL720 USB accelerator](https://www.kneo.ai/products/hardwares/HW2020122500000007/1)
* N/A

# Step 7 (For Kneron AI Competition 2022): Run [NEF](http://doc.kneron.com/docs/#toolchain/manual/#5-nef-workflow) model on [KL720 USB accelerator](https://www.kneo.ai/products/hardwares/HW2020122500000007/1)

[WARNING] Don't do this step in the toolchain Docker environment mentioned in Step 5.
We recommend that you read the [Kneron PLUS official document](http://doc.kneron.com/docs/#plus_python/#_top) first.
### Step 7-1: Download and Install the Kneron PLUS Python library (.whl)
* Go to the [Kneron education center](https://www.kneron.com/tw/support/education-center/)
* Scroll down to the OpenMMLab Kneron Edition table
* Select Kneron Plus v1.13.0 (pre-built Python library)
* Select your OS version (Ubuntu, Windows, MacOS, Raspberry Pi)
* Download `KneronPLUS-1.3.0-py3-none-any_{your_os}.whl`
* Unzip the downloaded `KneronPLUS-1.3.0-py3-none-any.whl.zip`
* Run `pip install KneronPLUS-1.3.0-py3-none-any.whl`
### Step 7-2: Download STDC example code
* Go to the [Kneron education center](https://www.kneron.com/tw/support/education-center/)
* Scroll down to the OpenMMLab Kneron Edition table
* Select kneron-mmsegmentation
* Select STDC
* Download `stdc_plus_demo.zip`
* Unzip the downloaded `stdc_plus_demo.zip`
### Step 7-3: Test the environment is ready (requires [KL720 USB accelerator](https://www.kneo.ai/products/hardwares/HW2020122500000007/1))
In `stdc_plus_demo`, we provide an example STDC model and image for a quick test.
* Plug the [KL720 USB accelerator](https://www.kneo.ai/products/hardwares/HW2020122500000007/1) into your computer's USB port
* Go to the `stdc_plus_demo` folder
```bash
cd /PATH/TO/stdc_plus_demo
```
* Install the required libraries
```bash
pip install -r requirements.txt
```
* Run example on [KL720 USB accelerator](https://www.kneo.ai/products/hardwares/HW2020122500000007/1)
```bash
python KL720DemoGenericInferenceSTDC_BypassHwPreProc.py -nef ./example_stdc_720.nef -img 000000000641.jpg
```
Then you can see the inference result saved as `output_000000000641.jpg` in the same folder.
The expected output of the command above will be something similar to the following text:
```plain
...
[Connect Device]
- Success
[Set Device Timeout]
- Success
[Upload Model]
- Success
[Read Image]
- Success
[Starting Inference Work]
- Starting inference loop 1 times
- .
[Retrieve Inference Node Output ]
- Success
[Output Result Image]
- Output bounding boxes on 'output_000000000641.jpg'
...
```
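The saved visualization is essentially the predicted class-id map pushed through a color palette. A sketch with a toy palette follows; the demo's actual colors are the Cityscapes ones, and only the first three are shown here:

```python
import numpy as np

# First three Cityscapes colors (road, sidewalk, building); the rest padded.
PALETTE = np.array([[128, 64, 128], [244, 35, 232], [70, 70, 70]]
                   + [[0, 0, 0]] * 16, dtype=np.uint8)

def colorize(seg_map):
    """Map an (H, W) array of class ids to an (H, W, 3) color image."""
    return PALETTE[np.asarray(seg_map)]

print(colorize([[0, 1], [2, 0]]).shape)  # (2, 2, 3)
```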
### Step 7-4: Run your NEF model and your image on [KL720 USB accelerator](https://www.kneo.ai/products/hardwares/HW2020122500000007/1)
Use the same script as in the previous step, but change the input NEF model path and image path to yours:
```bash
python KL720DemoGenericInferenceSTDC_BypassHwPreProc.py -img /PATH/TO/YOUR_IMAGE.bmp -nef /PATH/TO/YOUR/720_NEF_MODEL.nef
```