
Commit 3d6da1d: Update docs (#879)

Commit message:

* Add docs config
* Fix mock
* Add huggingface_hub to reqs for docs
* Remove from mocks
* Fix
* Change theme
* Fix
* Fix
* Update emoji
* Table of content
* Links in doc
* Update content
* Update examples
* Update
* Update
* Add save load

1 parent: b948136

11 files changed: +150 -25 lines

docs/conf.py (+4 -4)

@@ -16,7 +16,7 @@
 
 import sys
 import datetime
-import sphinx_rtd_theme
+# import sphinx_rtd_theme
 
 sys.path.append("..")
 

@@ -67,13 +67,13 @@ def get_version():
 # a list of builtin themes.
 #
 
-html_theme = "sphinx_rtd_theme"
-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
+# html_theme = "sphinx_rtd_theme"
+# html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
 
 # import karma_sphinx_theme
 # html_theme = "karma_sphinx_theme"
 
-html_theme = "faculty_sphinx_theme"
+html_theme = "sphinx_book_theme"
 
 # import catalyst_sphinx_theme
 # html_theme = "catalyst_sphinx_theme"
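For reference, the net effect of this hunk is that the theme section of ``docs/conf.py`` reduces to a single setting (a sketch of the relevant excerpt only; the surrounding configuration is assumed unchanged):

```python
# docs/conf.py (theme excerpt after this commit).
# sphinx_book_theme installs as a regular package that Sphinx can find by
# name, so the old html_theme_path / get_html_theme_path() call is no
# longer needed.
html_theme = "sphinx_book_theme"
```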

docs/encoders.rst (+1 -1)

@@ -1,4 +1,4 @@
-🏔 Available Encoders
+🔍 Available Encoders
 =====================
 
 ResNet

docs/encoders_timm.rst (+1 -1)

@@ -1,4 +1,4 @@
-🪐 Timm Encoders
+🎯 Timm Encoders
 ~~~~~~~~~~~~~~~~
 
 Pytorch Image Models (a.k.a. timm) has a lot of pretrained models and interface which allows using these models as encoders in smp,

docs/index.rst (+1 -0)

@@ -17,6 +17,7 @@ Welcome to Segmentation Models's documentation!
    encoders_timm
    losses
    metrics
+   save_load
    insights
 
 

docs/insights.rst (+1 -1)

@@ -1,4 +1,4 @@
-🔧 Insights
+💡 Insights
 ===========
 
 1. Models architecture

docs/install.rst (+1 -1)

@@ -1,4 +1,4 @@
-🛠 Installation
+⚙️ Installation
 ===============
 
 PyPI version:

docs/metrics.rst (+1 -1)

@@ -1,4 +1,4 @@
-📈 Metrics
+📏 Metrics
 ==========
 
 Functional metrics

docs/models.rst (+38 -10)

@@ -1,40 +1,68 @@
-📦 Segmentation Models
+🕸️ Segmentation Models
 ==============================
 
+
+.. contents::
+   :local:
+
+.. _unet:
+
 Unet
 ~~~~
 .. autoclass:: segmentation_models_pytorch.Unet
 
+
+.. _unetplusplus:
+
 Unet++
 ~~~~~~
 .. autoclass:: segmentation_models_pytorch.UnetPlusPlus
 
-MAnet
-~~~~~~
-.. autoclass:: segmentation_models_pytorch.MAnet
 
-Linknet
-~~~~~~~
-.. autoclass:: segmentation_models_pytorch.Linknet
+
+.. _fpn:
 
 FPN
 ~~~
 .. autoclass:: segmentation_models_pytorch.FPN
 
+
+.. _pspnet:
+
 PSPNet
 ~~~~~~
 .. autoclass:: segmentation_models_pytorch.PSPNet
 
-PAN
-~~~
-.. autoclass:: segmentation_models_pytorch.PAN
+
+.. _deeplabv3:
 
 DeepLabV3
 ~~~~~~~~~
 .. autoclass:: segmentation_models_pytorch.DeepLabV3
 
+
+.. _deeplabv3plus:
+
 DeepLabV3+
 ~~~~~~~~~~
 .. autoclass:: segmentation_models_pytorch.DeepLabV3Plus
 
 
+.. _linknet:
+
+Linknet
+~~~~~~~
+.. autoclass:: segmentation_models_pytorch.Linknet
+
+
+.. _manet:
+
+MAnet
+~~~~~~
+.. autoclass:: segmentation_models_pytorch.MAnet
+
+
+.. _pan:
+
+PAN
+~~~
+.. autoclass:: segmentation_models_pytorch.PAN

docs/quickstart.rst (+24 -4)

@@ -1,4 +1,4 @@
-Quick Start
+🚀 Quick Start
 ==============
 
 **1. Create segmentation model**

@@ -16,8 +16,9 @@ Segmentation model is just a PyTorch nn.Module, which can be created as easy as:
       classes=3,                 # model output channels (number of classes in your dataset)
    )
 
-- see table with available model architectures
-- see table with avaliable encoders and its corresponding weights
+- Check the page with available :doc:`model architectures <models>`.
+- Check the table with :doc:`available ported encoders and their corresponding weights <encoders>`.
+- `Pytorch Image Models (timm) <https://github.com/huggingface/pytorch-image-models>`_ encoders are also supported, check them :doc:`here <encoders_timm>`.
 
 **2. Configure data preprocessing**
 

@@ -33,4 +34,23 @@ All encoders have pretrained weights. Preparing your data the same way as during
 **3. Congratulations!** 🎉
 
 
-You are done! Now you can train your model with your favorite framework!
+You are done! Now you can train your model with your favorite framework, or as simply as:
+
+.. code-block:: python
+
+   for images, gt_masks in dataloader:
+
+      optimizer.zero_grad()
+      predicted_masks = model(images)
+      loss = loss_fn(predicted_masks, gt_masks)
+
+      loss.backward()
+      optimizer.step()
+
+Check the following examples:
+
+.. |colab-badge| image:: https://colab.research.google.com/assets/colab-badge.svg
+   :target: https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/master/examples/binary_segmentation_intro.ipynb
+   :alt: Open In Colab
+
+- Finetuning notebook on the Oxford Pet dataset with `PyTorch Lightning <https://github.com/qubvel/segmentation_models.pytorch/blob/master/examples/binary_segmentation_intro.ipynb>`_ |colab-badge|
+- Finetuning script for cloth segmentation with `PyTorch Lightning <https://github.com/ternaus/cloths_segmentation>`_
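The training loop added in this hunk can be made runnable end to end. A minimal self-contained sketch with plain ``torch``, where a single ``Conv2d`` and one dummy batch stand in for an smp model and a real ``DataLoader`` (both are hypothetical placeholders, not the quick start's actual objects):

```python
import torch
from torch import nn

# Stand-ins: in the quick start, `model` would be e.g. smp.Unet(...) and
# `dataloader` a torch.utils.data.DataLoader over your dataset.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(2, 3, 32, 32)                       # batch of RGB images
gt_masks = torch.randint(0, 2, (2, 1, 32, 32)).float()   # binary ground-truth masks
dataloader = [(images, gt_masks)] * 3                    # fake 3-batch "loader"

for images, gt_masks in dataloader:
    optimizer.zero_grad()                   # clear gradients from the previous step
    predicted_masks = model(images)         # logits, shape (B, classes, H, W)
    loss = loss_fn(predicted_masks, gt_masks)
    loss.backward()                         # backprop
    optimizer.step()                        # update weights
```

Note the `optimizer.zero_grad()` call: without it, gradients accumulate across iterations.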

docs/requirements.txt (+4 -2)

@@ -1,3 +1,5 @@
-faculty-sphinx-theme==0.2.2
+sphinx<7
+sphinx-book-theme==1.1.2
 six==1.15.0
-autodocsumm
+autodocsumm
+huggingface_hub

docs/save_load.rst (+74 -0, new file)

@@ -0,0 +1,74 @@
+📂 Saving and Loading
+=====================
+
+In this section, we will discuss how to save a trained model, push it to the Hugging Face Hub, and load it back for later use.
+
+Saving and Sharing a Model
+--------------------------
+
+Once you have trained your model, you can save it using the ``.save_pretrained`` method. This method saves the model configuration and weights to a directory of your choice.
+Optionally, you can push the model to the Hugging Face Hub by setting the ``push_to_hub`` parameter to ``True``.
+
+For example:
+
+.. code:: python
+
+   import segmentation_models_pytorch as smp
+
+   model = smp.Unet('resnet34', encoder_weights='imagenet')
+
+   # After training your model, save it to a directory
+   model.save_pretrained('./my_model')
+
+   # Or save and push it to the Hub in one step
+   model.save_pretrained('username/my-model', push_to_hub=True)
+
+Loading a Trained Model
+-----------------------
+
+Once your model is saved and pushed to the Hub, you can load it back using the ``smp.from_pretrained`` method. This method loads the model weights and configuration from a local directory or directly from the Hub.
+
+For example:
+
+.. code:: python
+
+   import segmentation_models_pytorch as smp
+
+   # Load the model from a local directory
+   model = smp.from_pretrained('./my_model')
+
+   # Alternatively, load the model directly from the Hugging Face Hub
+   model = smp.from_pretrained('username/my-model')
+
+Saving Model Metrics and Dataset Name
+-------------------------------------
+
+You can pass the ``metrics`` and ``dataset`` parameters to ``save_pretrained`` to record the model metrics and dataset name in the model card, along with the model configuration and weights.
+
+For example:
+
+.. code:: python
+
+   import segmentation_models_pytorch as smp
+
+   model = smp.Unet('resnet34', encoder_weights='imagenet')
+
+   # After training your model, save it to a directory
+   model.save_pretrained('./my_model', metrics={'accuracy': 0.95}, dataset='my_dataset')
+
+   # Or save and push it to the Hub in one step
+   model.save_pretrained('username/my-model', push_to_hub=True, metrics={'accuracy': 0.95}, dataset='my_dataset')
+
+
+Conclusion
+----------
+
+By following these steps, you can easily save, share, and load your models, facilitating collaboration and reproducibility in your projects. Don't forget to replace the placeholders with your actual model paths and names.
+
+|colab-badge|
+
+.. |colab-badge| image:: https://colab.research.google.com/assets/colab-badge.svg
+   :target: https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/master/examples/binary_segmentation_intro.ipynb
+   :alt: Open In Colab
+
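At heart, a ``save_pretrained`` / ``from_pretrained`` pair writes weights plus a small config file to a directory and reads them back. A generic sketch of that round-trip with plain ``torch`` and ``json`` (this illustrates the pattern only, not smp's actual implementation; the file names ``model.pt`` and ``config.json`` and the ``Conv2d`` stand-in model are made up for the example):

```python
import json
import tempfile
from pathlib import Path

import torch
from torch import nn

def save_checkpoint(model: nn.Module, config: dict, directory: str) -> None:
    """Write weights + config to a directory, mirroring the save_pretrained idea."""
    path = Path(directory)
    path.mkdir(parents=True, exist_ok=True)
    torch.save(model.state_dict(), path / "model.pt")      # weights
    (path / "config.json").write_text(json.dumps(config))  # architecture hyperparams

def load_checkpoint(directory: str):
    """Rebuild the model from the saved config, then restore its weights."""
    path = Path(directory)
    config = json.loads((path / "config.json").read_text())
    model = nn.Conv2d(config["in_channels"], config["classes"],
                      kernel_size=3, padding=1)
    model.load_state_dict(torch.load(path / "model.pt"))
    return model, config

with tempfile.TemporaryDirectory() as tmp:
    original = nn.Conv2d(3, 1, kernel_size=3, padding=1)
    save_checkpoint(original, {"in_channels": 3, "classes": 1}, tmp)
    restored, config = load_checkpoint(tmp)
    # Round-trip check: restored weights match the originals exactly
    same = torch.equal(original.weight, restored.weight)
```

The config file is what lets the loader rebuild the architecture before restoring weights, which is why smp can reconstruct a model from a Hub repo id alone.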
