Add Colab Tutorial (#7)
* add badge
* Created using Colaboratory
* add read docs
* Fixed readthedocs
* fixed colab ref
* add readthedocs.txt
* add link
* fixed modelzoo link
* add missing reference
* fixed docs
* remove relative path in docs
* add colab in README.md
* update docker image
* add newline
* fixed br
parent: b2724da80b
commit: b72a6d00ef
.readthedocs.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
+version: 2
+
+python:
+  version: 3.7
+  install:
+    - requirements: requirements/docs.txt
+    - requirements: requirements/readthedocs.txt
README.md (11 changed lines)
@@ -1,6 +1,14 @@
 <div align="center">
   <img src="resources/mmseg-logo.png" width="600"/>
 </div>
 <br />

+[](https://mmsegmentation.readthedocs.io/en/latest/)
+[](https://github.com/open-mmlab/mmsegmentation/actions)
+[](https://codecov.io/gh/open-mmlab/mmsegmentation)
+[](https://github.com/open-mmlab/mmsegmentation/blob/master/LICENSE)
+
+Documentation: https://mmsegmentation.readthedocs.io/
+
 ## Introduction

@@ -50,6 +58,7 @@ Supported methods:
 - [x] [DeepLabV3+](configs/deeplabv3plus)
 - [x] [UPerNet](configs/upernet)
 - [x] [NonLocal Net](configs/nonlocal_net)
 - [x] [EncNet](configs/encnet)
 - [x] [CCNet](configs/ccnet)
 - [x] [DANet](configs/danet)
 - [x] [GCNet](configs/gcnet)

@@ -65,6 +74,8 @@ Please refer to [INSTALL.md](docs/install.md) for installation and dataset prepa
 Please see [getting_started.md](docs/getting_started.md) for the basic usage of MMSegmentation.
 There are also tutorials for [adding new dataset](docs/tutorials/new_dataset.md), [designing data pipeline](docs/tutorials/data_pipeline.md), and [adding new modules](docs/tutorials/new_modules.md).
+
+A Colab tutorial is also provided. You may preview the notebook [here](demo/MMSegmentation_Tutorial.ipynb) or directly [run](https://colab.research.google.com/github/open-mmlab/mmsegmentation/blob/master/demo/MMSegmentation_Tutorial.ipynb) on Colab.

 ## Contributing

 We appreciate all contributions to improve MMSegmentation. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
demo/MMSegmentation_Tutorial.ipynb (new file, 1416 lines)
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
-ARG PYTORCH="1.3"
+ARG PYTORCH="1.5"
 ARG CUDA="10.1"
 ARG CUDNN="7"

@@ -14,7 +14,8 @@ RUN apt-get update && apt-get install -y libglib2.0-0 libsm6 libxrender-dev libx
 # Install mmsegmentation
 RUN conda clean --all
+RUN pip install mmcv-full==latest+torch1.5.0+cu101 -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
 RUN git clone https://github.com/open-mmlab/mmsegmentation.git /mmsegmentation
 WORKDIR /mmsegmentation
 ENV FORCE_CUDA="1"
 RUN pip install -r requirements/build.txt
 RUN pip install --no-cache-dir -e .
@@ -45,7 +45,7 @@ mmsegmentation
 The data could be found [here](https://www.cityscapes-dataset.com/downloads/) after registration.

 By convention, `**labelTrainIds.png` are used for cityscapes training.
-We provide a [script](../tools/convert_datasets/cityscapes.py) based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts)
+We provide a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py) based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts)
 to generate `**labelTrainIds.png`.
 ```shell
 # --nproc 8 uses 8 processes for conversion; it can be omitted.
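The conversion this script performs is, at heart, a per-pixel id remap. A minimal sketch of that remap (the mapping entries below are a small illustrative subset of the full table in cityscapesscripts' `labels.py`, and `to_train_ids` is a hypothetical helper, not the script's API):

```python
# Illustrative subset of the Cityscapes id -> trainId table; the
# authoritative table is defined in cityscapesscripts' labels.py.
ID_TO_TRAIN_ID = {7: 0, 8: 1, 11: 2, 26: 13}  # road, sidewalk, building, car
IGNORE_INDEX = 255  # ids without a train id are ignored during training

def to_train_ids(raw_ids):
    """Remap a row of raw label ids to train ids."""
    return [ID_TO_TRAIN_ID.get(i, IGNORE_INDEX) for i in raw_ids]

print(to_train_ids([7, 7, 8, 26, 4]))  # [0, 0, 1, 13, 255]
```

Roughly speaking, the real script applies such a table to every pixel of each `*labelIds.png` and writes the result out as `*labelTrainIds.png`.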
@@ -62,7 +62,7 @@ If you would like to use augmented VOC dataset, please run following command to
 python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
 ```

-Please refer to [concat dataset](tutorials/new_dataset.md#concatenate-dataset) for details about how to concatenate them and train them together.
+Please refer to [concat dataset](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/tutorials/new_dataset.md#concatenate-dataset) for details about how to concatenate them and train them together.

 ### ADE20K
@@ -311,7 +311,6 @@ Params: 48.98 M

 (1) FLOPs are related to the input shape while parameters are not. The default input shape is (1, 3, 1280, 800).
 (2) Some operators are not counted into FLOPs like GN and custom operators.
-You can add support for new operators by modifying [`mmseg/utils/flops_counter.py`](../mmseg/utils/flops_counter.py).
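Note (1) can be verified with closed-form counts for a single convolution layer. This is a sketch with illustrative helper names, not the API of `flops_counter.py`:

```python
def conv2d_params(c_in, c_out, k):
    """Parameter count of a k x k conv: weights plus one bias per filter."""
    return c_out * (c_in * k * k + 1)

def conv2d_macs(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulate count: each output element costs c_in*k*k MACs."""
    return c_out * h_out * w_out * c_in * k * k

# Parameters ignore the input shape; MACs scale with the output area.
print(conv2d_params(3, 64, 3))  # 1792
print(conv2d_macs(3, 64, 3, 400, 250) // conv2d_macs(3, 64, 3, 200, 125))  # 4
```

Doubling both spatial dimensions quadruples the MAC count while the parameter count is unchanged, which is exactly why a FLOPs number is only meaningful together with its input shape.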
### Publish a model
@@ -18,6 +18,8 @@ Results are obtained with the script `tools/benchmark.py` which computes the ave
 * `whole` mode: The `test_cfg` will be like `dict(mode='whole')`.
+
+  In this mode, the whole image will be passed into the network directly.

 By default, we use `slide` inference for 769x769 trained models and `whole` inference for the rest.
 * For input size of 8x+1 (e.g. 769), `align_corners=True` is adopted as a traditional practice.
 Otherwise, for input size of 8x (e.g. 512, 1024), `align_corners=False` is adopted.
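The tiling that `slide` mode performs can be sketched as follows, assuming (as in the 769x769 configs) a 769 crop with stride 513 and an image at least as large as the crop; `slide_windows` is an illustrative helper, not the mmseg API:

```python
def slide_windows(h, w, crop, stride):
    """Top-left corners of crops covering an h x w image.

    The final window in each dimension is shifted back to stay inside
    the image, so every pixel is covered at least once.
    """
    ys = list(range(0, h - crop + 1, stride))
    xs = list(range(0, w - crop + 1, stride))
    if ys[-1] != h - crop:
        ys.append(h - crop)
    if xs[-1] != w - crop:
        xs.append(w - crop)
    return [(y, x) for y in ys for x in xs]

wins = slide_windows(1024, 2048, crop=769, stride=513)
print(len(wins), wins[-1])  # 8 (255, 1279)
```

Predictions from the overlapping windows are then merged into the full-size output, typically by averaging the logits where windows overlap.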
@@ -25,55 +27,59 @@ Otherwise, for input size of 8x (e.g. 512, 1024), `align_corners=False` is adopte

 ### FCN

-Please refer to [FCN](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fcn) for details.
+Please refer to [FCN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn) for details.

 ### PSPNet

-Please refer to [PSPNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/pspnet) for details.
+Please refer to [PSPNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet) for details.

 ### DeepLabV3

-Please refer to [DeepLabV3](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3) for details.
+Please refer to [DeepLabV3](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3) for details.

 ### PSANet

-Please refer to [PSANet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/psanet) for details.
+Please refer to [PSANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet) for details.

 ### DeepLabV3+

-Please refer to [DeepLabV3+](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3plus) for details.
+Please refer to [DeepLabV3+](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus) for details.

 ### UPerNet

-Please refer to [UPerNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/upernet) for details.
+Please refer to [UPerNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet) for details.

 ### NonLocal Net

-Please refer to [NonLocal Net](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/nlnet) for details.
+Please refer to [NonLocal Net](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nlnet) for details.

+### EncNet
+
+Please refer to [EncNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/encnet) for details.
+
 ### CCNet

-Please refer to [CCNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ccnet) for details.
+Please refer to [CCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet) for details.

 ### DANet

-Please refer to [DANet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/danet) for details.
+Please refer to [DANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet) for details.

 ### HRNet

-Please refer to [HRNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/hrnet) for details.
+Please refer to [HRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet) for details.

 ### GCNet

-Please refer to [GCNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/gcnet) for details.
+Please refer to [GCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/gcnet) for details.

 ### ANN

-Please refer to [ANN](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ann) for details.
+Please refer to [ANN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ann) for details.

 ### OCRNet

-Please refer to [OCRNet](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ocrnet) for details.
+Please refer to [OCRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet) for details.

 ## Speed benchmark
@@ -4,3 +4,4 @@
 new_dataset.md
 data_pipeline.md
 new_modules.md
+training_tricks.md
@@ -121,7 +121,7 @@ model = dict(

 ### Add new heads

-In MMSegmentation, we provide a base [BaseDecodeHead](../../mmseg/models/decode_heads/decode_head.py) for all segmentation heads.
+In MMSegmentation, we provide a base [BaseDecodeHead](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/decode_head.py) for all segmentation heads.
 All newly implemented decode heads should be derived from it.
 Here we show how to develop a new head with the example of [PSPNet](https://arxiv.org/abs/1612.01105) as follows.
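The derive-and-register workflow that paragraph describes can be sketched without the framework; every name below (`HEADS`, `register_head`, `PSPHeadSketch`, `build_head`) is an illustrative stand-in, not the real mmseg registry or `BaseDecodeHead` signature:

```python
HEADS = {}  # toy registry: maps a type name to a head class

def register_head(cls):
    HEADS[cls.__name__] = cls
    return cls

class BaseDecodeHeadSketch:
    """Stand-in base class holding the attributes every head shares."""
    def __init__(self, in_channels, num_classes):
        self.in_channels = in_channels
        self.num_classes = num_classes

    def forward(self, features):
        raise NotImplementedError  # subclasses implement the actual logic

@register_head
class PSPHeadSketch(BaseDecodeHeadSketch):
    """PSP-style head: pool features at several scales, then fuse them."""
    def __init__(self, in_channels, num_classes, pool_scales=(1, 2, 3, 6)):
        super().__init__(in_channels, num_classes)
        self.pool_scales = pool_scales

def build_head(cfg):
    """Instantiate a registered head from a config dict."""
    cfg = dict(cfg)
    return HEADS[cfg.pop('type')](**cfg)

head = build_head(dict(type='PSPHeadSketch', in_channels=2048, num_classes=19))
print(type(head).__name__)  # PSPHeadSketch
```

In mmseg proper the decorator is `@HEADS.register_module()`, and the head is selected by `type` in the model config in much the same way.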
@@ -144,7 +144,7 @@ class ToDataContainer(object):
         ``dict(key='xxx', **kwargs)``. The ``key`` in result will
         be converted to :obj:`mmcv.DataContainer` with ``**kwargs``.
         Default: ``(dict(key='img', stack=True),
             dict(key='gt_semantic_seg'))``.
     """

     def __init__(self,
@@ -15,12 +15,15 @@ class Resize(object):

     ``img_scale`` can either be a tuple (single-scale) or a list of tuple
     (multi-scale). There are 3 multiscale modes:
+
     - ``ratio_range is not None``: randomly sample a ratio from the ratio range
       and multiply it with the image scale.
+
     - ``ratio_range is None and multiscale_mode == "range"``: randomly sample a
       scale from a range.
+
     - ``ratio_range is None and multiscale_mode == "value"``: randomly sample a
       scale from multiple scales.

     Args:
         img_scale (tuple or list[tuple]): Images scales for resizing.
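The three modes read naturally as a three-way branch. A simplified sketch with scalar scales (the real `Resize` works on `(w, h)` tuples; `sample_scale` is a hypothetical helper):

```python
import random

def sample_scale(img_scale, ratio_range=None, multiscale_mode='range'):
    if ratio_range is not None:
        # mode 1: multiply the base scale by a randomly sampled ratio
        lo, hi = ratio_range
        return img_scale * random.uniform(lo, hi)
    if multiscale_mode == 'range':
        # mode 2: sample uniformly between the smallest and largest scale
        return random.uniform(min(img_scale), max(img_scale))
    if multiscale_mode == 'value':
        # mode 3: pick one of the given scales
        return random.choice(img_scale)
    raise ValueError(f'unknown multiscale_mode: {multiscale_mode}')

assert 512 <= sample_scale(1024, ratio_range=(0.5, 2.0)) <= 2048
assert sample_scale([512, 768, 1024], multiscale_mode='value') in (512, 768, 1024)
```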
@@ -330,11 +330,14 @@ class ResNet(nn.Module):
             freeze running stats (mean and var). Note: Effect on Batch Norm
             and its variants only.
         plugins (list[dict]): List of plugins for stages, each dict contains:
-            cfg (dict, required): Cfg dict to build plugin.
-            position (str, required): Position inside block to insert plugin,
-                options: 'after_conv1', 'after_conv2', 'after_conv3'.
-            stages (tuple[bool], optional): Stages to apply plugin, length
-                should be same as 'num_stages'
+
+            - cfg (dict, required): Cfg dict to build plugin.
+
+            - position (str, required): Position inside block to insert plugin,
+              options: 'after_conv1', 'after_conv2', 'after_conv3'.
+
+            - stages (tuple[bool], optional): Stages to apply plugin, length
+              should be same as 'num_stages'
         multi_grid (Sequence[int]|None): Multi grid dilation rates of last
             stage. Default: None
         contract_dilation (bool): Whether contract first dilation of each layer
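The plugin contract above (required `cfg` and `position`, optional `stages` of length `num_stages`) can be captured in a small checker; `validate_plugin` and the plugin `type` are illustrative, not mmseg API:

```python
ALLOWED_POSITIONS = {'after_conv1', 'after_conv2', 'after_conv3'}

def validate_plugin(plugin, num_stages=4):
    """Check one plugin dict against the docstring's contract."""
    assert 'cfg' in plugin, 'cfg is required'
    assert plugin.get('position') in ALLOWED_POSITIONS, 'bad position'
    stages = plugin.get('stages')
    if stages is not None:
        assert len(stages) == num_stages, 'stages must match num_stages'
    return True

plugin = dict(cfg=dict(type='SomePlugin'),  # hypothetical plugin type
              position='after_conv3',
              stages=(False, False, True, True))
print(validate_plugin(plugin))  # True
```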
@@ -675,13 +678,9 @@ class ResNetV1c(ResNet):
 class ResNetV1d(ResNet):
-    """ResNetV1d variant described in [1]_.
-
-    Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv
-    in the input stem with three 3x3 convs. And in the downsampling block,
-    a 2x2 avg_pool with stride 2 is added before conv, whose stride is
-    changed to 1.
-
-    References:
-        .. [1] https://arxiv.org/pdf/1812.01187.pdf
+    """ResNetV1d variant described in [1]_.
+
+    Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in
+    the input stem with three 3x3 convs. And in the downsampling block, a 2x2
+    avg_pool with stride 2 is added before conv, whose stride is changed to 1.
     """

     def __init__(self, **kwargs):
requirements/docs.txt (new file, 4 lines)
@@ -0,0 +1,4 @@
+recommonmark
+sphinx
+sphinx_markdown_tables
+sphinx_rtd_theme
requirements/readthedocs.txt (new file, 3 lines)
@@ -0,0 +1,3 @@
+mmcv
+torch
+torchvision