
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "TmJAKgDN7bAK",
"outputId": "5b1b1a08-e156-46bd-d71e-b9703f33a97b"
},
"source": [
"<h1 align=\"center\"> Object Detection </h1>\n",
"Object Detection task with fcos model. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "h2ozjHL1xQU4"
},
"source": [
"Mount Google Drive and go the the fcos directory. Suppose we have uploaded `ai_training` to Google Drive.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "fOjRHXV1WLhy",
"outputId": "4c202baf-9956-4a24-e138-258b69da1f02"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Mounted at /content/drive\n"
]
}
],
"source": [
"from google.colab import drive\n",
"drive.mount('/content/drive')"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "F3VlscOpXz4S",
"outputId": "9b4744b1-6e83-4f53-c4c7-6e4377443962"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/content/drive/MyDrive/ai_training/detection/fcos\n"
]
}
],
"source": [
"cd /content/drive/MyDrive/ai_training/detection/fcos/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "trtBnLh9UtwQ"
},
"source": [
"# Prerequisites\n",
"- Python >= 3.6\n",
"\n",
"# Installation\n",
"To install the dependencies, run"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "y_qS6eOA3-9G",
"outputId": "946b9bc9-8f73-4483-ca67-aabf49f7e108"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI (from -r requirements-colab.txt (line 15))\n",
" Cloning https://github.com/cocodataset/cocoapi.git to /tmp/pip-req-build-3pvk1zin\n",
" Running command git clone -q https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-3pvk1zin\n",
"Requirement already satisfied: numpy>=1.18.5 in /usr/local/lib/python3.7/dist-packages (from -r requirements-colab.txt (line 1)) (1.19.5)\n",
"Collecting Keras==2.2.4\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/5e/10/aa32dad071ce52b5502266b5c659451cfd6ffcbf14e6c8c4f16c0ff5aaab/Keras-2.2.4-py2.py3-none-any.whl (312kB)\n",
"\u001b[K |████████████████████████████████| 317kB 7.5MB/s \n",
"\u001b[?25hCollecting keras-resnet==0.2.0\n",
" Downloading https://files.pythonhosted.org/packages/76/d4/a35cbd07381139dda4db42c81b88c59254faac026109022727b45b31bcad/keras-resnet-0.2.0.tar.gz\n",
"Collecting opencv-contrib-python==3.4.2.17\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/12/32/8d32d40cd35e61c80cb112ef5e8dbdcfbb06124f36a765df98517a12e753/opencv_contrib_python-3.4.2.17-cp37-cp37m-manylinux1_x86_64.whl (30.6MB)\n",
"\u001b[K |████████████████████████████████| 30.6MB 96kB/s \n",
"\u001b[?25hCollecting opencv-python==3.4.2.17\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/8f/8f/a5d2fa3a3309c4e4aa28eb989d81a95b57c63406b4d439758a1a0a810c77/opencv_python-3.4.2.17-cp37-cp37m-manylinux1_x86_64.whl (25.0MB)\n",
"\u001b[K |████████████████████████████████| 25.0MB 123kB/s \n",
"\u001b[?25hRequirement already satisfied: Pillow>=6.2.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements-colab.txt (line 6)) (7.1.2)\n",
"Collecting h5py==2.10.0\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/3f/c0/abde58b837e066bca19a3f7332d9d0493521d7dd6b48248451a9e3fe2214/h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9MB)\n",
"\u001b[K |████████████████████████████████| 2.9MB 44.8MB/s \n",
"\u001b[?25hCollecting PyYAML>=5.3.1\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/7a/a5/393c087efdc78091afa2af9f1378762f9821c9c1d7a22c5753fb5ac5f97a/PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636kB)\n",
"\u001b[K |████████████████████████████████| 645kB 36.5MB/s \n",
"\u001b[?25hCollecting onnx==1.6.0\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/ec/2b/6802531b7f87599781bbfcfefd9d0861d849b339ee5156515838829417e6/onnx-1.6.0-cp37-cp37m-manylinux1_x86_64.whl (4.8MB)\n",
"\u001b[K |████████████████████████████████| 4.8MB 39.5MB/s \n",
"\u001b[?25hCollecting onnxruntime\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/0c/f0/666d6e3ceaa276a54e728f9972732e058544cbb6a3e1a778a8d6f87132c1/onnxruntime-1.7.0-cp37-cp37m-manylinux2014_x86_64.whl (4.1MB)\n",
"\u001b[K |████████████████████████████████| 4.1MB 28.9MB/s \n",
"\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from -r requirements-colab.txt (line 11)) (1.1.5)\n",
"Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from -r requirements-colab.txt (line 12)) (3.2.2)\n",
"Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from -r requirements-colab.txt (line 13)) (4.41.1)\n",
"Requirement already satisfied: progressbar2 in /usr/local/lib/python3.7/dist-packages (from -r requirements-colab.txt (line 14)) (3.38.0)\n",
"Requirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.7/dist-packages (from pycocotools==2.0->-r requirements-colab.txt (line 15)) (56.1.0)\n",
"Requirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.7/dist-packages (from pycocotools==2.0->-r requirements-colab.txt (line 15)) (0.29.23)\n",
"Collecting keras-applications>=1.0.6\n",
"\u001b[?25l Downloading https://files.pythonhosted.org/packages/71/e3/19762fdfc62877ae9102edf6342d71b28fbfd9dea3d2f96a882ce099b03f/Keras_Applications-1.0.8-py3-none-any.whl (50kB)\n",
"\u001b[K |████████████████████████████████| 51kB 7.9MB/s \n",
"\u001b[?25hRequirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.7/dist-packages (from Keras==2.2.4->-r requirements-colab.txt (line 2)) (1.4.1)\n",
"Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.7/dist-packages (from Keras==2.2.4->-r requirements-colab.txt (line 2)) (1.1.2)\n",
"Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.7/dist-packages (from Keras==2.2.4->-r requirements-colab.txt (line 2)) (1.15.0)\n",
"Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.7/dist-packages (from onnx==1.6.0->-r requirements-colab.txt (line 9)) (3.7.4.3)\n",
"Requirement already satisfied: protobuf in /usr/local/lib/python3.7/dist-packages (from onnx==1.6.0->-r requirements-colab.txt (line 9)) (3.12.4)\n",
"Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->-r requirements-colab.txt (line 11)) (2.8.1)\n",
"Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->-r requirements-colab.txt (line 11)) (2018.9)\n",
"Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->-r requirements-colab.txt (line 12)) (1.3.1)\n",
"Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->-r requirements-colab.txt (line 12)) (2.4.7)\n",
"Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->-r requirements-colab.txt (line 12)) (0.10.0)\n",
"Requirement already satisfied: python-utils>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from progressbar2->-r requirements-colab.txt (line 14)) (2.5.6)\n",
"Building wheels for collected packages: keras-resnet, pycocotools\n",
" Building wheel for keras-resnet (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for keras-resnet: filename=keras_resnet-0.2.0-py2.py3-none-any.whl size=20486 sha256=5972a496d0afaf240ff78ef9458cd88eca706c7e0055b9b8c8e86b1d25c81897\n",
" Stored in directory: /root/.cache/pip/wheels/5f/09/a5/497a30fd9ad9964e98a1254d1e164bcd1b8a5eda36197ecb3c\n",
" Building wheel for pycocotools (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for pycocotools: filename=pycocotools-2.0-cp37-cp37m-linux_x86_64.whl size=263915 sha256=0f5cd73685e7d4cb5952b99e7b7e579220c1a0a6104fdf11eb317a02f268c2c2\n",
" Stored in directory: /tmp/pip-ephem-wheel-cache-2verf4pu/wheels/90/51/41/646daf401c3bc408ff10de34ec76587a9b3ebfac8d21ca5c3a\n",
"Successfully built keras-resnet pycocotools\n",
"\u001b[31mERROR: tensorflow 2.5.0 has requirement h5py~=3.1.0, but you'll have h5py 2.10.0 which is incompatible.\u001b[0m\n",
"\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\n",
"Installing collected packages: h5py, keras-applications, PyYAML, Keras, keras-resnet, opencv-contrib-python, opencv-python, onnx, onnxruntime, pycocotools\n",
" Found existing installation: h5py 3.1.0\n",
" Uninstalling h5py-3.1.0:\n",
" Successfully uninstalled h5py-3.1.0\n",
" Found existing installation: PyYAML 3.13\n",
" Uninstalling PyYAML-3.13:\n",
" Successfully uninstalled PyYAML-3.13\n",
" Found existing installation: Keras 2.4.3\n",
" Uninstalling Keras-2.4.3:\n",
" Successfully uninstalled Keras-2.4.3\n",
" Found existing installation: opencv-contrib-python 4.1.2.30\n",
" Uninstalling opencv-contrib-python-4.1.2.30:\n",
" Successfully uninstalled opencv-contrib-python-4.1.2.30\n",
" Found existing installation: opencv-python 4.1.2.30\n",
" Uninstalling opencv-python-4.1.2.30:\n",
" Successfully uninstalled opencv-python-4.1.2.30\n",
" Found existing installation: pycocotools 2.0.2\n",
" Uninstalling pycocotools-2.0.2:\n",
" Successfully uninstalled pycocotools-2.0.2\n",
"Successfully installed Keras-2.2.4 PyYAML-5.4.1 h5py-2.10.0 keras-applications-1.0.8 keras-resnet-0.2.0 onnx-1.6.0 onnxruntime-1.7.0 opencv-contrib-python-3.4.2.17 opencv-python-3.4.2.17 pycocotools-2.0\n",
"running build_ext\n",
"skipping 'utils/compute_overlap.c' Cython extension (up-to-date)\n",
"building 'utils.compute_overlap' extension\n",
"x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -I/usr/local/lib/python3.7/dist-packages/numpy/core/include -c utils/compute_overlap.c -o build/temp.linux-x86_64-3.7/utils/compute_overlap.o\n",
"In file included from \u001b[01m\u001b[K/usr/local/lib/python3.7/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1822:0\u001b[m\u001b[K,\n",
" from \u001b[01m\u001b[K/usr/local/lib/python3.7/dist-packages/numpy/core/include/numpy/ndarrayobject.h:12\u001b[m\u001b[K,\n",
" from \u001b[01m\u001b[K/usr/local/lib/python3.7/dist-packages/numpy/core/include/numpy/arrayobject.h:4\u001b[m\u001b[K,\n",
" from \u001b[01m\u001b[Kutils/compute_overlap.c:612\u001b[m\u001b[K:\n",
"\u001b[01m\u001b[K/usr/local/lib/python3.7/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[K#warning \"Using deprecated NumPy API, disable it with \" \"#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION\" [\u001b[01;35m\u001b[K-Wcpp\u001b[m\u001b[K]\n",
" #\u001b[01;35m\u001b[Kwarning\u001b[m\u001b[K \"Using deprecated NumPy API, disable it with \" \\\n",
" \u001b[01;35m\u001b[K^~~~~~~\u001b[m\u001b[K\n",
"x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.7/utils/compute_overlap.o -o build/lib.linux-x86_64-3.7/utils/compute_overlap.cpython-37m-x86_64-linux-gnu.so\n",
"copying build/lib.linux-x86_64-3.7/utils/compute_overlap.cpython-37m-x86_64-linux-gnu.so -> utils\n"
]
}
],
"source": [
"!pip install -r requirements-colab.txt\n",
"!python setup.py build_ext --inplace"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "or8JlMIqZPXg",
"outputId": "bfb7305e-b420-432f-c72d-2d288a50c893"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TensorFlow 1.x selected.\n",
"True\n",
"1.15.2\n"
]
}
],
"source": [
"%tensorflow_version 1.x\n",
"import tensorflow as tf\n",
"print(tf.test.is_gpu_available()) # should get 'True'\n",
"print(tf.__version__) # should get '1.15.2'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7Pl_eqa_UtwS"
},
"source": [
"# Dataset & Preparation\n",
"For this tutorial, we will use COCO128 dataset, located under `detection/yolov5/coco128`. Here is the data yaml file for COCO128:\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "bJAYMbCLUtwS",
"outputId": "bbf2bc5f-ca6f-4467-b53b-0105c3a828d1"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"train: ../yolov5/coco128/images/train2017 # 128 images\n",
"val: ../yolov5/coco128/images/train2017 # 128 images\n",
"\n",
"# number of classes\n",
"nc: 80\n",
"\n",
"# type of dataset\n",
"dataset_type: csv\n",
"\n",
"# class names\n",
"names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',\n",
" 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',\n",
" 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',\n",
" 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',\n",
" 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\n",
" 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',\n",
" 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',\n",
" 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',\n",
" 'hair drier', 'toothbrush']\n"
]
}
],
"source": [
"!cat data/coco128.yaml"
]
},
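{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a minimal sketch, assuming PyYAML is available — it is installed via `requirements-colab.txt`), we can load the data file and confirm that the class list is consistent with `nc`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import yaml\n",
"\n",
"# Load the dataset config and check the class list against nc\n",
"with open('data/coco128.yaml') as f:\n",
"    cfg = yaml.safe_load(f)\n",
"assert len(cfg['names']) == cfg['nc'], 'class count mismatch'\n",
"print(cfg['nc'], 'classes,', cfg['dataset_type'], 'annotations')"
]
},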
{
"cell_type": "markdown",
"metadata": {
"id": "YYg0TAgjUtwT"
},
"source": [
"# Train with COCO128\n",
"\n",
"Let's finetune a pretrained model on COCO128 custom dataset (located under yolov5 folder). The pretrained model we used here is the model with backbone Darknet53s, pan FPN type, trained on COCO dataset. We download the pretrained model from [Model_Zoo](https://github.com/kneron/Model_Zoo/tree/main/detection/fcos). Since COCO128 is small, we choose to freeze the pretrained model backbone. Execute commands:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "RnPaXtRsKUzq",
"outputId": "4733fcbe-f58b-469c-d38c-8acd3d8592f4"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2021-05-20 23:44:26-- https://raw.githubusercontent.com/kneron/Model_Zoo/main/detection/fcos/coco_yolov5_pan_3_11_1.9920_0.4832.h5\n",
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
"Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 34583584 (33M) [application/octet-stream]\n",
"Saving to: coco_yolov5_pan_3_11_1.9920_0.4832.h5\n",
"\n",
"coco_yolov5_pan_3_1 100%[===================>] 32.98M 63.1MB/s in 0.5s \n",
"\n",
"2021-05-20 23:44:27 (63.1 MB/s) - coco_yolov5_pan_3_11_1.9920_0.4832.h5 saved [34583584/34583584]\n",
"\n"
]
}
],
"source": [
"!wget https://raw.githubusercontent.com/kneron/Model_Zoo/main/detection/fcos/coco_yolov5_pan_3_11_1.9920_0.4832.h5"
]
},
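{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, verify that the pretrained weights were downloaded (a minimal check; the filename comes from the `wget` command above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Confirm the pretrained checkpoint exists and report its size\n",
"fname = 'coco_yolov5_pan_3_11_1.9920_0.4832.h5'\n",
"assert os.path.isfile(fname), 'pretrained weights not found'\n",
"print('%.1f MB' % (os.path.getsize(fname) / 1e6))"
]
},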
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "eNiDzZ3Rip7U",
"outputId": "072f72e6-0928-4fb5-b4ae-bf0d7ef069c8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using TensorFlow backend.\n",
"{'data': 'data/coco128.yaml', 'snapshot': 'coco_yolov5_pan_3_11_1.9920_0.4832.h5', 'backbone': 'darknet53s', 'fpn': 'pan', 'reg_func': 'linear', 'stage': 3, 'head_type': 'simple', 'centerness_pos': 'reg', 'batch_size': 8, 'gpu': '0', 'epochs': 2, 'steps': 5, 'lr': 0.0001, 'snapshot_path': 'snapshots/exp', 'freeze_backbone': True, 'input_size': 512, 'compute_val_loss': False}\n",
"WARNING:tensorflow:From train.py:48: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n",
"\n",
"WARNING:tensorflow:From train.py:50: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n",
"\n",
"2021-05-29 01:34:28.167297: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2199995000 Hz\n",
"2021-05-29 01:34:28.167471: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562e6264b100 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2021-05-29 01:34:28.167501: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2021-05-29 01:34:28.169166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2021-05-29 01:34:28.347740: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:34:28.348443: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562e6264ad80 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2021-05-29 01:34:28.348477: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2021-05-29 01:34:28.348663: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:34:28.349225: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2021-05-29 01:34:28.349563: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:34:28.351214: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2021-05-29 01:34:28.352707: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2021-05-29 01:34:28.353074: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2021-05-29 01:34:28.354542: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2021-05-29 01:34:28.355209: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2021-05-29 01:34:28.358005: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2021-05-29 01:34:28.358110: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:34:28.358695: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:34:28.359207: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0\n",
"2021-05-29 01:34:28.359261: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:34:28.360293: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2021-05-29 01:34:28.360335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186] 0 \n",
"2021-05-29 01:34:28.360346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0: N \n",
"2021-05-29 01:34:28.360462: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:34:28.361118: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:34:28.361653: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14161 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
"anchor parameters:\n",
"{'strides': [8, 16, 32], 'interest_sizes': [[-1, 64], [64, 128], [128, 100000000.0]]}\n",
"anchor parameters:\n",
"{'strides': [8, 16, 32], 'interest_sizes': [[-1, 64], [64, 128], [128, 100000000.0]]}\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"If using Keras pass *_constraint arguments to layers.\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n",
"\n",
"number of features 3\n",
"Tensor(\"P3_output_256/BiasAdd:0\", shape=(?, 64, 64, 256), dtype=float32)\n",
"Tensor(\"P4_output_256/BiasAdd:0\", shape=(?, 32, 32, 256), dtype=float32)\n",
"Tensor(\"P5_output_256/BiasAdd:0\", shape=(?, 16, 16, 256), dtype=float32)\n",
"training model output:\n",
"Tensor(\"regression/concat:0\", shape=(?, ?, 4), dtype=float32)\n",
"Tensor(\"classification/concat:0\", shape=(?, ?, 80), dtype=float32)\n",
"Tensor(\"centerness/concat:0\", shape=(?, ?, 1), dtype=float32)\n",
"WARNING:tensorflow:From /content/drive/My Drive/ai_training/detection/fcos/layers.py:275: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use tf.where in 2.0, which has the same broadcast rule as np.where\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
"\n",
"Epoch 1/2\n",
"2021-05-29 01:35:09.217093: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2021-05-29 01:35:14.234967: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"5/5 [==============================] - 18s 4s/step - loss: 1.4581 - regression_loss: 0.3108 - classification_loss: 0.3506 - centerness_loss: 0.5974\n",
"Running network: 100% (126 of 126) |######| Elapsed Time: 0:00:11 Time: 0:00:11\n",
"Parsing annotations: 100% (126 of 126) |##| Elapsed Time: 0:00:00 Time: 0:00:00\n",
"Computing mAP: 100% (80 of 80) |##########| Elapsed Time: 0:00:00 Time: 0:00:00\n",
"254 instances of class person with average precision: 0.7037\n",
"6 instances of class bicycle with average precision: 0.2236\n",
"46 instances of class car with average precision: 0.2259\n",
"5 instances of class motorcycle with average precision: 0.8583\n",
"6 instances of class airplane with average precision: 0.9762\n",
"7 instances of class bus with average precision: 0.3786\n",
"3 instances of class train with average precision: 0.8667\n",
"12 instances of class truck with average precision: 0.2245\n",
"6 instances of class boat with average precision: 0.3429\n",
"14 instances of class traffic light with average precision: 0.1582\n",
"0 instances of class fire hydrant with average precision: 0.0000\n",
"2 instances of class stop sign with average precision: 0.8333\n",
"0 instances of class parking meter with average precision: 0.0000\n",
"9 instances of class bench with average precision: 0.3406\n",
"16 instances of class bird with average precision: 0.8330\n",
"4 instances of class cat with average precision: 1.0000\n",
"9 instances of class dog with average precision: 0.7578\n",
"2 instances of class horse with average precision: 1.0000\n",
"0 instances of class sheep with average precision: 0.0000\n",
"0 instances of class cow with average precision: 0.0000\n",
"17 instances of class elephant with average precision: 0.7782\n",
"1 instances of class bear with average precision: 1.0000\n",
"4 instances of class zebra with average precision: 1.0000\n",
"9 instances of class giraffe with average precision: 0.8358\n",
"6 instances of class backpack with average precision: 0.5258\n",
"18 instances of class umbrella with average precision: 0.5125\n",
"19 instances of class handbag with average precision: 0.1124\n",
"7 instances of class tie with average precision: 0.4578\n",
"4 instances of class suitcase with average precision: 0.8500\n",
"5 instances of class frisbee with average precision: 0.7600\n",
"1 instances of class skis with average precision: 1.0000\n",
"7 instances of class snowboard with average precision: 0.6607\n",
"6 instances of class sports ball with average precision: 0.1818\n",
"10 instances of class kite with average precision: 0.2022\n",
"4 instances of class baseball bat with average precision: 0.1434\n",
"7 instances of class baseball glove with average precision: 0.2857\n",
"5 instances of class skateboard with average precision: 0.4400\n",
"0 instances of class surfboard with average precision: 0.0000\n",
"7 instances of class tennis racket with average precision: 0.4813\n",
"18 instances of class bottle with average precision: 0.2207\n",
"16 instances of class wine glass with average precision: 0.4968\n",
"36 instances of class cup with average precision: 0.3958\n",
"6 instances of class fork with average precision: 0.0972\n",
"16 instances of class knife with average precision: 0.4821\n",
"22 instances of class spoon with average precision: 0.2987\n",
"28 instances of class bowl with average precision: 0.5229\n",
"1 instances of class banana with average precision: 0.1250\n",
"0 instances of class apple with average precision: 0.0000\n",
"2 instances of class sandwich with average precision: 0.5000\n",
"4 instances of class orange with average precision: 0.4250\n",
"11 instances of class broccoli with average precision: 0.1620\n",
"24 instances of class carrot with average precision: 0.5436\n",
"2 instances of class hot dog with average precision: 1.0000\n",
"5 instances of class pizza with average precision: 0.9111\n",
"14 instances of class donut with average precision: 0.8175\n",
"4 instances of class cake with average precision: 0.9500\n",
"35 instances of class chair with average precision: 0.4138\n",
"6 instances of class couch with average precision: 0.4739\n",
"14 instances of class potted plant with average precision: 0.5842\n",
"3 instances of class bed with average precision: 0.7143\n",
"13 instances of class dining table with average precision: 0.4123\n",
"2 instances of class toilet with average precision: 0.0328\n",
"2 instances of class tv with average precision: 0.6111\n",
"3 instances of class laptop with average precision: 0.4167\n",
"2 instances of class mouse with average precision: 0.0000\n",
"8 instances of class remote with average precision: 0.5391\n",
"0 instances of class keyboard with average precision: 0.0000\n",
"8 instances of class cell phone with average precision: 0.0411\n",
"3 instances of class microwave with average precision: 0.8333\n",
"5 instances of class oven with average precision: 0.2905\n",
"0 instances of class toaster with average precision: 0.0000\n",
"6 instances of class sink with average precision: 0.1075\n",
"5 instances of class refrigerator with average precision: 0.8769\n",
"29 instances of class book with average precision: 0.1053\n",
"9 instances of class clock with average precision: 0.8667\n",
"2 instances of class vase with average precision: 0.6667\n",
"1 instances of class scissors with average precision: 0.0000\n",
"21 instances of class teddy bear with average precision: 0.5111\n",
"0 instances of class hair drier with average precision: 0.0000\n",
"5 instances of class toothbrush with average precision: 0.4805\n",
"mAP: 0.5194\n",
"\n",
"Epoch 00001: mAP improved from -inf to 0.51939, saving model to snapshots/exp/csv_darknet53s_pan_3_01.h5\n",
"Epoch 2/2\n",
"5/5 [==============================] - 1s 153ms/step - loss: 1.4387 - regression_loss: 0.3102 - classification_loss: 0.3309 - centerness_loss: 0.5992\n",
"Running network: 100% (126 of 126) |######| Elapsed Time: 0:00:06 Time: 0:00:06\n",
"Parsing annotations: 100% (126 of 126) |##| Elapsed Time: 0:00:00 Time: 0:00:00\n",
"Computing mAP: 100% (80 of 80) |##########| Elapsed Time: 0:00:00 Time: 0:00:00\n",
"254 instances of class person with average precision: 0.6062\n",
"6 instances of class bicycle with average precision: 0.2555\n",
"46 instances of class car with average precision: 0.1909\n",
"5 instances of class motorcycle with average precision: 0.4524\n",
"6 instances of class airplane with average precision: 0.6565\n",
"7 instances of class bus with average precision: 0.6508\n",
"3 instances of class train with average precision: 0.6667\n",
"12 instances of class truck with average precision: 0.2903\n",
"6 instances of class boat with average precision: 0.4167\n",
"14 instances of class traffic light with average precision: 0.2005\n",
"0 instances of class fire hydrant with average precision: 0.0000\n",
"2 instances of class stop sign with average precision: 0.8333\n",
"0 instances of class parking meter with average precision: 0.0000\n",
"9 instances of class bench with average precision: 0.0889\n",
"16 instances of class bird with average precision: 0.8400\n",
"4 instances of class cat with average precision: 0.1979\n",
"9 instances of class dog with average precision: 0.7407\n",
"2 instances of class horse with average precision: 0.8333\n",
"0 instances of class sheep with average precision: 0.0000\n",
"0 instances of class cow with average precision: 0.0000\n",
"17 instances of class elephant with average precision: 0.7287\n",
"1 instances of class bear with average precision: 1.0000\n",
"4 instances of class zebra with average precision: 0.9000\n",
"9 instances of class giraffe with average precision: 0.6721\n",
"6 instances of class backpack with average precision: 0.5277\n",
"18 instances of class umbrella with average precision: 0.7275\n",
"19 instances of class handbag with average precision: 0.0906\n",
"7 instances of class tie with average precision: 0.4979\n",
"4 instances of class suitcase with average precision: 0.6500\n",
"5 instances of class frisbee with average precision: 0.7600\n",
"1 instances of class skis with average precision: 1.0000\n",
"7 instances of class snowboard with average precision: 0.5952\n",
"6 instances of class sports ball with average precision: 0.1667\n",
"10 instances of class kite with average precision: 0.1232\n",
"4 instances of class baseball bat with average precision: 0.0863\n",
"7 instances of class baseball glove with average precision: 0.2857\n",
"5 instances of class skateboard with average precision: 0.3462\n",
"0 instances of class surfboard with average precision: 0.0000\n",
"7 instances of class tennis racket with average precision: 0.4294\n",
"18 instances of class bottle with average precision: 0.2222\n",
"16 instances of class wine glass with average precision: 0.4224\n",
"36 instances of class cup with average precision: 0.3750\n",
"6 instances of class fork with average precision: 0.0328\n",
"16 instances of class knife with average precision: 0.4594\n",
"22 instances of class spoon with average precision: 0.3677\n",
"28 instances of class bowl with average precision: 0.5004\n",
"1 instances of class banana with average precision: 0.5000\n",
"0 instances of class apple with average precision: 0.0000\n",
"2 instances of class sandwich with average precision: 0.5182\n",
"4 instances of class orange with average precision: 0.5192\n",
"11 instances of class broccoli with average precision: 0.1890\n",
"24 instances of class carrot with average precision: 0.4162\n",
"2 instances of class hot dog with average precision: 1.0000\n",
"5 instances of class pizza with average precision: 0.7804\n",
"14 instances of class donut with average precision: 0.8796\n",
"4 instances of class cake with average precision: 0.9500\n",
"35 instances of class chair with average precision: 0.4638\n",
"6 instances of class couch with average precision: 0.3741\n",
"14 instances of class potted plant with average precision: 0.5417\n",
"3 instances of class bed with average precision: 0.6111\n",
"13 instances of class dining table with average precision: 0.1968\n",
"2 instances of class toilet with average precision: 0.0238\n",
"2 instances of class tv with average precision: 0.3929\n",
"3 instances of class laptop with average precision: 0.3333\n",
"2 instances of class mouse with average precision: 0.0000\n",
"8 instances of class remote with average precision: 0.5240\n",
"0 instances of class keyboard with average precision: 0.0000\n",
"8 instances of class cell phone with average precision: 0.0364\n",
"3 instances of class microwave with average precision: 1.0000\n",
"5 instances of class oven with average precision: 0.3229\n",
"0 instances of class toaster with average precision: 0.0000\n",
"6 instances of class sink with average precision: 0.1103\n",
"5 instances of class refrigerator with average precision: 0.8333\n",
"29 instances of class book with average precision: 0.1352\n",
"9 instances of class clock with average precision: 0.8185\n",
"2 instances of class vase with average precision: 0.4500\n",
"1 instances of class scissors with average precision: 0.0000\n",
"21 instances of class teddy bear with average precision: 0.4146\n",
"0 instances of class hair drier with average precision: 0.0000\n",
"5 instances of class toothbrush with average precision: 0.5000\n",
"mAP: 0.4750\n",
"\n",
"Epoch 00002: mAP did not improve from 0.51939\n"
]
}
],
"source": [
"!python train.py --backbone darknet53s --fpn pan --snapshot coco_yolov5_pan_3_11_1.9920_0.4832.h5 --freeze-backbone --batch-size 8 --gpu 0 --steps 5 --epochs 2 --snapshot-path snapshots/exp --data data/coco128.yaml\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ikEIKljrfYWo"
},
"source": [
"As we can see from the messages, the training losses and elapsed time are printed, and the validation mAP is reported for each epoch. The trained model will be saved to the `snapshots/exp` folder. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QWtMdoD4UtwU"
},
"source": [
"# Inference \n",
"In this section, we will go through an example of using a trained network for inference. That is, we will pass an image into the network to detect and classify the objects in it. We will use the script `inference.py`, which takes an image and a model and returns the detection information. The output format is a list of lists, [[l,t,w,h,score,class_id], [l,t,w,h,score,class_id], ...]. It can also draw the bounding boxes on the image if a save path is given. \n",
"\n",
"Let's run the pretrained network on a screenshot from a movie with the following code:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "srE2L76ifYg3",
"outputId": "9a3269bf-b5fa-49a7-8604-2850a18c4a73"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using TensorFlow backend.\n",
"{'img_path': 'tutorial/demo/fcos_demo.jpg', 'class_id_path': 'utils/coco_id_class_map.json', 'gpu': 0, 'snapshot': 'snapshots/exp/csv_darknet53s_pan_3_01.h5', 'input_shape': [512, 512], 'max_objects': 100, 'score_thres': 0.6, 'iou_thres': 0.5, 'nms': 1, 'save_path': 'tutorial/demo/out.jpg', 'save_preds_path': None}\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"If using Keras pass *_constraint arguments to layers.\n",
"2021-05-29 01:36:20.958229: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2021-05-29 01:36:20.963715: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:20.964291: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2021-05-29 01:36:20.964557: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:36:20.967977: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2021-05-29 01:36:20.969529: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2021-05-29 01:36:20.969869: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2021-05-29 01:36:20.978841: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2021-05-29 01:36:20.989136: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2021-05-29 01:36:20.998423: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2021-05-29 01:36:20.998620: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:20.999229: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:20.999727: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0\n",
"2021-05-29 01:36:21.005562: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2199995000 Hz\n",
"2021-05-29 01:36:21.005752: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55668b7e8d80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2021-05-29 01:36:21.005780: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2021-05-29 01:36:21.196452: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:21.197198: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55668b7e8f40 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2021-05-29 01:36:21.197232: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2021-05-29 01:36:21.197398: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:21.197959: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2021-05-29 01:36:21.198026: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:36:21.198053: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2021-05-29 01:36:21.198074: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2021-05-29 01:36:21.198095: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2021-05-29 01:36:21.198118: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2021-05-29 01:36:21.198143: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2021-05-29 01:36:21.198161: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2021-05-29 01:36:21.198231: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:21.198789: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:21.199314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0\n",
"2021-05-29 01:36:21.199385: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:36:21.200405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2021-05-29 01:36:21.200433: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186] 0 \n",
"2021-05-29 01:36:21.200447: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0: N \n",
"2021-05-29 01:36:21.200584: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:21.201235: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:21.201806: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
"2021-05-29 01:36:21.201844: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14161 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n",
"\n",
"/tensorflow-1.15.2/python3.7/keras/engine/saving.py:341: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.\n",
" warnings.warn('No training configuration found in save file: '\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
"\n",
"2021-05-29 01:36:34.711653: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2021-05-29 01:36:35.850235: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"[[909.0585994720459, 158.15237045288086, 305.4385471343994, 764.1001510620117, 0.7487878203392029, 0.0], [641.4619374275208, 202.17178344726562, 264.6464467048645, 726.476469039917, 0.7456897497177124, 0.0]]\n"
]
}
],
"source": [
"!python inference.py --snapshot snapshots/exp/csv_darknet53s_pan_3_01.h5 --score-thres 0.6 --gpu 0 --class-id-path utils/coco_id_class_map.json --img-path tutorial/demo/fcos_demo.jpg --save-path tutorial/demo/out.jpg"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Chub7jmYUtwW"
},
"source": [
"The inference result is also saved in a JSON file in the same folder as the input image, `./tutorial/demo/fcos_demo_preds.json`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ANKIp3fnUtwW",
"outputId": "03f72681-0683-4f70-8433-e93b31b431bc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"img_path\": \"tutorial/demo/fcos_demo.jpg\", \"0_0\": [[909.0585994720459, 158.15237045288086, 305.4385471343994, 764.1001510620117, 0.7487878203392029, 0.0], [641.4619374275208, 202.17178344726562, 264.6464467048645, 726.476469039917, 0.7456897497177124, 0.0]]}"
]
}
],
"source": [
"!cat ./tutorial/demo/fcos_demo_preds.json"
]
},
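{
"cell_type": "markdown",
"metadata": {},
"source": [
"The saved predictions can be parsed with standard Python. This is a minimal sketch, assuming the `[l,t,w,h,score,class_id]` box format and the file path shown above:\n",
"\n",
"```python\n",
"import json\n",
"\n",
"# Load the predictions written by inference.py\n",
"with open('tutorial/demo/fcos_demo_preds.json') as f:\n",
"    preds = json.load(f)\n",
"\n",
"# Each detection is [left, top, width, height, score, class_id]\n",
"for l, t, w, h, score, cls in preds['0_0']:\n",
"    print('class %d  score %.2f  box (%.0f, %.0f, %.0f, %.0f)' % (cls, score, l, t, w, h))\n",
"```"
]
},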
{
"cell_type": "markdown",
"metadata": {
"id": "5MGOu131UtwX"
},
"source": [
"# Convert to ONNX\n",
"\n",
"Pull the latest [ONNX converter](https://github.com/kneron/ONNX_Convertor/tree/master/keras-onnx) from GitHub; see the repository for the latest documentation on converting models to ONNX. Then run the conversion script from the `ONNX_Convertor/keras-onnx` folder:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "px8QpaxszYYk",
"outputId": "62cb7bcc-e132-4811-c914-fc150e91d2bd"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cloning into 'ONNX_Convertor'...\n",
"remote: Enumerating objects: 1174, done.\u001b[K\n",
"remote: Counting objects: 100% (154/154), done.\u001b[K\n",
"remote: Compressing objects: 100% (114/114), done.\u001b[K\n",
"remote: Total 1174 (delta 84), reused 78 (delta 39), pack-reused 1020\u001b[K\n",
"Receiving objects: 100% (1174/1174), 5.85 MiB | 15.10 MiB/s, done.\n",
"Resolving deltas: 100% (784/784), done.\n",
"Checking out files: 100% (225/225), done.\n"
]
}
],
"source": [
"!git clone https://github.com/kneron/ONNX_Convertor.git"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "efg1bogHUtwX",
"outputId": "9cb03535-995c-4f34-dcef-ca0f2fce1441"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using TensorFlow backend.\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"If using Keras pass *_constraint arguments to layers.\n",
"2021-05-29 01:36:44.716898: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2021-05-29 01:36:44.722177: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.722732: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2021-05-29 01:36:44.723007: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:36:44.724479: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2021-05-29 01:36:44.732524: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2021-05-29 01:36:44.732869: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2021-05-29 01:36:44.734434: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2021-05-29 01:36:44.735641: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2021-05-29 01:36:44.740488: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2021-05-29 01:36:44.740601: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.741204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.741716: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0\n",
"2021-05-29 01:36:44.746359: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2199995000 Hz\n",
"2021-05-29 01:36:44.746626: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55572fa36f40 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2021-05-29 01:36:44.746657: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2021-05-29 01:36:44.930850: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.931605: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55572fa37100 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2021-05-29 01:36:44.931644: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2021-05-29 01:36:44.931805: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.932373: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2021-05-29 01:36:44.932430: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:36:44.932452: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2021-05-29 01:36:44.932472: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2021-05-29 01:36:44.932492: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2021-05-29 01:36:44.932511: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2021-05-29 01:36:44.932528: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2021-05-29 01:36:44.932545: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2021-05-29 01:36:44.932611: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.933198: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.933702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0\n",
"2021-05-29 01:36:44.933766: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2021-05-29 01:36:44.934740: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2021-05-29 01:36:44.934765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186] 0 \n",
"2021-05-29 01:36:44.934776: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0: N \n",
"2021-05-29 01:36:44.934889: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.935459: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2021-05-29 01:36:44.936055: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
"2021-05-29 01:36:44.936105: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14161 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n",
"\n",
"/tensorflow-1.15.2/python3.7/keras/engine/saving.py:341: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.\n",
" warnings.warn('No training configuration found in save file: '\n"
]
}
],
"source": [
"!python ONNX_Convertor/keras-onnx/generate_onnx.py -o snapshots/exp/csv_darknet53s_pan_3_01_converted.onnx snapshots/exp/csv_darknet53s_pan_3_01.h5\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "D4CoXxQvz2wO"
},
"source": [
"We get the ONNX model `csv_darknet53s_pan_3_01_converted.onnx` under the `snapshots/exp` folder.\n",
"\n",
"\n"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "tutorial.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 0
}