Add Colab Tutorial (#7)

* add badge

* Created using Colaboratory

* add read docs

* Fixed readthedocs

* fixed colab ref

* add readthedocs.txt

* add link

* fixed modelzoo link

* add missing reference

* fixed docs

* remove relative path in docs

* add colab in README.md

* update docker image

* add newline

* fixed br
Jerry Jiarui XU 2020-07-10 16:55:47 +08:00 committed by GitHub
parent b2724da80b
commit b72a6d00ef
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
13 changed files with 1485 additions and 35 deletions

.readthedocs.yml (new file, 7 lines)

@@ -0,0 +1,7 @@
version: 2
python:
version: 3.7
install:
- requirements: requirements/docs.txt
- requirements: requirements/readthedocs.txt

README.md

@@ -1,6 +1,14 @@
 <div align="center">
   <img src="resources/mmseg-logo.png" width="600"/>
 </div>
+<br />
+
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmsegmentation.readthedocs.io/en/latest/)
+[![badge](https://github.com/open-mmlab/mmsegmentation/workflows/build/badge.svg)](https://github.com/open-mmlab/mmsegmentation/actions)
+[![codecov](https://codecov.io/gh/open-mmlab/mmsegmentation/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmsegmentation)
+[![license](https://img.shields.io/github/license/open-mmlab/mmsegmentation.svg)](https://github.com/open-mmlab/mmsegmentation/blob/master/LICENSE)
+
+Documentation: https://mmsegmentation.readthedocs.io/
 
 ## Introduction
@@ -50,6 +58,7 @@ Supported methods:
 - [x] [DeepLabV3+](configs/deeplabv3plus)
 - [x] [UPerNet](configs/upernet)
 - [x] [NonLocal Net](configs/nonlocal_net)
+- [x] [EncNet](configs/encnet)
 - [x] [CCNet](configs/ccnet)
 - [x] [DANet](configs/danet)
 - [x] [GCNet](configs/gcnet)
@@ -65,6 +74,8 @@ Please refer to [INSTALL.md](docs/install.md) for installation and dataset prepa
 Please see [getting_started.md](docs/getting_started.md) for the basic usage of MMSegmentation.
 There are also tutorials for [adding new dataset](docs/tutorials/new_dataset.md), [designing data pipeline](docs/tutorials/data_pipeline.md), and [adding new modules](docs/tutorials/new_modules.md).
+
+A Colab tutorial is also provided. You may preview the notebook [here](demo/MMSegmentation_Tutorial.ipynb) or directly [run](https://colab.research.google.com/github/open-mmlab/mmsegmentation/blob/master/demo/MMSegmentation_Tutorial.ipynb) on Colab.
 
 ## Contributing
 
 We appreciate all contributions to improve MMSegmentation. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.

demo/MMSegmentation_Tutorial.ipynb: file diff suppressed because one or more lines are too long

docker/Dockerfile

@@ -1,4 +1,4 @@
-ARG PYTORCH="1.3"
+ARG PYTORCH="1.5"
 ARG CUDA="10.1"
 ARG CUDNN="7"
@@ -14,7 +14,8 @@ RUN apt-get update && apt-get install -y libglib2.0-0 libsm6 libxrender-dev libx
 # Install mmsegmentation
 RUN conda clean --all
+RUN pip install mmcv-full==latest+torch1.5.0+cu101 -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
 RUN git clone https://github.com/open-mmlab/mmsegmentation.git /mmsegmentation
 WORKDIR /mmsegmentation
-ENV FORCE_CUDA="1"
+RUN pip install -r requirements/build.txt
 RUN pip install --no-cache-dir -e .


@@ -45,7 +45,7 @@ mmsegmentation
 The data could be found [here](https://www.cityscapes-dataset.com/downloads/) after registration.
 By convention, `**labelTrainIds.png` are used for cityscapes training.
-We provided a [script](../tools/convert_datasets/cityscapes.py) based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts)
+We provided a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py) based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts)
 to generate `**labelTrainIds.png`.
 
 ```shell
 # --nproc means 8 processes for conversion, which could be omitted as well.
@@ -62,7 +62,7 @@ If you would like to use augmented VOC dataset, please run following command to
 python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
 ```
 
-Please refer to [concat dataset](tutorials/new_dataset.md#concatenate-dataset) for details about how to concatenate them and train them together.
+Please refer to [concat dataset](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/tutorials/new_dataset.md#concatenate-dataset) for details about how to concatenate them and train them together.
 
 ### ADE20K
@@ -311,7 +311,6 @@ Params: 48.98 M
 (1) FLOPs are related to the input shape while parameters are not. The default input shape is (1, 3, 1280, 800).
 (2) Some operators are not counted into FLOPs like GN and custom operators.
-You can add support for new operators by modifying [`mmseg/utils/flops_counter.py`](../mmseg/utils/flops_counter.py).
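Note (1) can be checked with simple arithmetic. The sketch below is not mmseg's `flops_counter` implementation; it just applies the textbook formulas for a single convolution to show why FLOPs track the input shape while the parameter count does not:

```python
def conv2d_params(c_in, c_out, k):
    """Parameter count of a k x k convolution (weights plus one bias per filter)."""
    return c_out * (c_in * k * k + 1)

def conv2d_flops(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulate count: one MAC per kernel element per output pixel."""
    return c_out * h_out * w_out * c_in * k * k

# Parameters are independent of the input resolution...
params = conv2d_params(3, 64, 3)
# ...but FLOPs grow linearly with the number of output pixels.
f_small = conv2d_flops(3, 64, 3, 512, 512)
f_large = conv2d_flops(3, 64, 3, 1024, 1024)
assert f_large == 4 * f_small  # 4x the pixels, 4x the FLOPs, same params
```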
 ### Publish a model

docs/model_zoo.md

@@ -18,6 +18,8 @@ Results are obtained with the script `tools/benchmark.py` which computes the ave
 * `whole` mode: The `test_cfg` will be like `dict(mode='whole')`.
   In this mode, the whole image will be passed into the network directly.
+
+By default, we use `slide` inference for 769x769 trained model, `whole` inference for the rest.
 
 * For input size of 8x+1 (e.g. 769), `align_corner=True` is adopted as a traditional practice.
 Otherwise, for input size of 8x (e.g. 512, 1024), `align_corner=False` is adopted.
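The `slide` mode described above crops the image on a regular grid and stitches the predictions back together. A minimal sketch of how the crop positions could be enumerated (illustrative only: the 769/513 values mirror the common Cityscapes setting, and mmseg's actual implementation differs in details):

```python
def slide_windows(h, w, crop, stride):
    """Top-left corners of crop x crop windows covering an (h, w) image.

    Assumes crop <= h and crop <= w; the last row/column is clamped to the
    image border so every pixel is covered by at least one window.
    """
    ys = list(range(0, h - crop + 1, stride))
    xs = list(range(0, w - crop + 1, stride))
    # Make sure the bottom and right edges are covered.
    if ys[-1] + crop < h:
        ys.append(h - crop)
    if xs[-1] + crop < w:
        xs.append(w - crop)
    return [(y, x) for y in ys for x in xs]

# e.g. a 1024x2048 Cityscapes image with 769x769 crops and stride 513
wins = slide_windows(1024, 2048, 769, 513)
```

At inference time the per-window logits are accumulated into a full-size map and divided by a per-pixel hit count before the argmax.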
@@ -25,55 +27,59 @@ Otherwise, for input size of 8x (e.g. 512, 1024), `align_corner=False` is adopte
 ### FCN
 
-Please refer to [FCN](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fcn) for details.
+Please refer to [FCN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn) for details.
 
 ### PSPNet
 
-Please refer to [PSPNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/pspnet) for details.
+Please refer to [PSPNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet) for details.
 
 ### DeepLabV3
 
-Please refer to [DeepLabV3](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3) for details.
+Please refer to [DeepLabV3](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3) for details.
 
 ### PSANet
 
-Please refer to [PSANet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/psanet) for details.
+Please refer to [PSANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet) for details.
 
 ### DeepLabV3+
 
-Please refer to [DeepLabV3+](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3plus) for details.
+Please refer to [DeepLabV3+](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus) for details.
 
 ### UPerNet
 
-Please refer to [UPerNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/upernet) for details.
+Please refer to [UPerNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet) for details.
 
 ### NonLocal Net
 
-Please refer to [NonLocal Net](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/nlnet) for details.
+Please refer to [NonLocal Net](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nlnet) for details.
 
+### EncNet
+
+Please refer to [EncNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/encnet) for details.
+
 ### CCNet
 
-Please refer to [CCNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ccnet) for details.
+Please refer to [CCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet) for details.
 
 ### DANet
 
-Please refer to [DANet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/danet) for details.
+Please refer to [DANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet) for details.
 
 ### HRNet
 
-Please refer to [HRNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/hrnet) for details.
+Please refer to [HRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet) for details.
 
 ### GCNet
 
-Please refer to [GCNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/gcnet) for details.
+Please refer to [GCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/gcnet) for details.
 
 ### ANN
 
-Please refer to [ANN](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ann) for details.
+Please refer to [ANN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ann) for details.
 
 ### OCRNet
 
-Please refer to [OCRNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ocrnet) for details.
+Please refer to [OCRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet) for details.
 
 ## Speed benchmark


@@ -4,3 +4,4 @@
 new_dataset.md
 data_pipeline.md
 new_modules.md
+training_tricks.md

docs/tutorials/new_modules.md

@@ -121,7 +121,7 @@ model = dict(
 ### Add new heads
 
-In MMSegmentation, we provide a base [BaseDecodeHead](../../mmseg/models/decode_heads/decode_head.py) for all segmentation heads.
+In MMSegmentation, we provide a base [BaseDecodeHead](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/decode_head.py) for all segmentation heads.
 All newly implemented decode heads should be derived from it.
 Here we show how to develop a new head with the example of [PSPNet](https://arxiv.org/abs/1612.01105) in the following.


@@ -144,7 +144,7 @@ class ToDataContainer(object):
         ``dict(key='xxx', **kwargs)``. The ``key`` in result will
         be converted to :obj:`mmcv.DataContainer` with ``**kwargs``.
         Default: ``(dict(key='img', stack=True),
         dict(key='gt_semantic_seg'))``.
     """
 
     def __init__(self,

mmseg/datasets/pipelines/transforms.py

@@ -15,12 +15,15 @@ class Resize(object):
     ``img_scale`` can either be a tuple (single-scale) or a list of tuple
     (multi-scale). There are 3 multiscale modes:
+
     - ``ratio_range is not None``: randomly sample a ratio from the ratio range
       and multiply it with the image scale.
+
     - ``ratio_range is None and multiscale_mode == "range"``: randomly sample a
       scale from a range.
+
     - ``ratio_range is None and multiscale_mode == "value"``: randomly sample a
       scale from multiple scales.
 
     Args:
         img_scale (tuple or list[tuple]): Images scales for resizing.
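The three modes in the docstring can be sketched as a single sampling function. `sample_scale` is a hypothetical helper written for illustration, not the class's actual implementation:

```python
import random

def sample_scale(img_scale, ratio_range=None, multiscale_mode='range'):
    """Sketch of Resize's three multiscale modes, returning a (w, h) scale."""
    if ratio_range is not None:
        # mode 1: scale a single base size by a ratio drawn from ratio_range
        lo, hi = ratio_range
        r = random.uniform(lo, hi)
        w, h = img_scale
        return int(w * r), int(h * r)
    if multiscale_mode == 'range':
        # mode 2: sample each side uniformly between the two given scales
        (w1, h1), (w2, h2) = img_scale
        return (random.randint(min(w1, w2), max(w1, w2)),
                random.randint(min(h1, h2), max(h1, h2)))
    # mode 3 ('value'): pick one of the listed scales verbatim
    return random.choice(img_scale)

# e.g. pick one of two fixed scales
scale = sample_scale([(2048, 512), (2048, 1024)], multiscale_mode='value')
```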

mmseg/models/backbones/resnet.py

@@ -330,11 +330,14 @@ class ResNet(nn.Module):
             freeze running stats (mean and var). Note: Effect on Batch Norm
             and its variants only.
         plugins (list[dict]): List of plugins for stages, each dict contains:
-            cfg (dict, required): Cfg dict to build plugin.
-            position (str, required): Position inside block to insert plugin,
-                options: 'after_conv1', 'after_conv2', 'after_conv3'.
-            stages (tuple[bool], optional): Stages to apply plugin, length
-                should be same as 'num_stages'
+
+            - cfg (dict, required): Cfg dict to build plugin.
+
+            - position (str, required): Position inside block to insert plugin,
+              options: 'after_conv1', 'after_conv2', 'after_conv3'.
+
+            - stages (tuple[bool], optional): Stages to apply plugin, length
+              should be same as 'num_stages'
         multi_grid (Sequence[int]|None): Multi grid dilation rates of last
             stage. Default: None
         contract_dilation (bool): Whether contract first dilation of each layer
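The contract the docstring spells out for `plugins` can be captured in a small validator. `check_plugins` is a hypothetical helper, not part of mmseg, and the `GeneralizedAttention` cfg in the usage line is only an example:

```python
def check_plugins(plugins, num_stages):
    """Validate a ResNet ``plugins`` list against the documented contract."""
    allowed = ('after_conv1', 'after_conv2', 'after_conv3')
    for p in plugins:
        assert 'cfg' in p, 'cfg is required'
        assert p.get('position') in allowed, f"bad position: {p.get('position')!r}"
        stages = p.get('stages')
        if stages is not None:
            # one bool per stage, same length as num_stages
            assert len(stages) == num_stages, 'stages length must equal num_stages'

# a plugin applied only in the last two of four stages, after conv2
check_plugins(
    [dict(cfg=dict(type='GeneralizedAttention'),
          position='after_conv2',
          stages=(False, False, True, True))],
    num_stages=4)
```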
@@ -675,13 +678,9 @@ class ResNetV1c(ResNet):
 class ResNetV1d(ResNet):
     """ResNetV1d variant described in [1]_.
 
-    Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv
-    in the input stem with three 3x3 convs. And in the downsampling block,
-    a 2x2 avg_pool with stride 2 is added before conv, whose stride is
-    changed to 1.
-
-    References:
-        .. [1] https://arxiv.org/pdf/1812.01187.pdf
+    Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in
+    the input stem with three 3x3 convs. And in the downsampling block, a 2x2
+    avg_pool with stride 2 is added before conv, whose stride is changed to 1.
     """
 
     def __init__(self, **kwargs):
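To make the stem change concrete, here is a quick weight-count comparison, assuming the usual 3→32→32→64 channel widths for the deep stem (a back-of-envelope sketch, not mmseg code). The motivation for the three 3x3 convs is the extra depth and nonlinearities, not fewer parameters:

```python
def conv_weights(c_in, c_out, k):
    """Weight count of a k x k convolution (biases ignored)."""
    return c_out * c_in * k * k

# ResNetV1b stem: one 7x7 conv, 3 -> 64 channels
v1b = conv_weights(3, 64, 7)
# ResNetV1d deep stem: three 3x3 convs, 3 -> 32 -> 32 -> 64 channels (assumed widths)
v1d = conv_weights(3, 32, 3) + conv_weights(32, 32, 3) + conv_weights(32, 64, 3)
```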

requirements/docs.txt (new file, 4 lines)

@@ -0,0 +1,4 @@
recommonmark
sphinx
sphinx_markdown_tables
sphinx_rtd_theme

requirements/readthedocs.txt (new file)

@@ -0,0 +1,3 @@
mmcv
torch
torchvision