
Commit 3f98763

Removing formatting issues.
1 parent: 27295ea

18 files changed: +53, -357 lines

.ci/docker/requirements.txt (+3, -3)

@@ -14,7 +14,7 @@ tqdm==4.66.1
 numpy==1.24.4
 matplotlib
 librosa
-torch==2.7
+torch==2.6
 torchvision
 torchdata
 networkx
@@ -67,7 +67,7 @@ iopath
 pygame==2.6.0
 pycocotools
 semilearn==0.3.2
-torchao==0.10.0
+torchao==0.5.0
 segment_anything==1.0
 torchrec==1.1.0; platform_system == "Linux"
-fbgemm-gpu==1.2.0; platform_system == "Linux"
+fbgemm-gpu==1.1.0; platform_system == "Linux"

.jenkins/build.sh (+5, -2)

@@ -22,10 +22,13 @@ sudo apt-get install -y pandoc
 #Install PyTorch Nightly for test.
 # Nightly - pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
 # Install 2.5 to merge all 2.4 PRs - uncomment to install nightly binaries (update the version as needed).
+# sudo pip uninstall -y torch torchvision torchaudio torchtext torchdata
+# sudo pip3 install torch==2.6.0 torchvision --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu124
 # sudo pip uninstall -y fbgemm-gpu torchrec
-# sudo pip uninstall -y torch torchvision torchaudio torchtext torchdata torchrl tensordict
 # sudo pip3 install fbgemm-gpu==1.1.0 torchrec==1.0.0 --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu124
-# pip3 install torch==2.7.0 torchvision torchaudio --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu126
+sudo pip uninstall -y torch torchvision torchaudio torchtext torchdata torchrl tensordict
+pip3 install torch==2.7.0 torchvision torchaudio --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu126
+#sudo pip uninstall -y fbgemm-gpu
 # Install two language tokenizers for Translation with TorchText tutorial
 python -m spacy download en_core_web_sm
 python -m spacy download de_core_news_sm

.jenkins/metadata.json (-3)

@@ -1,7 +1,4 @@
 {
-    "recipes_source/torch_logs.py": {
-        "duration": 0
-    },
     "intermediate_source/ax_multiobjective_nas_tutorial.py": {
         "extra_files": ["intermediate_source/mnist_train_nas.py"],
         "duration": 2000

.jenkins/validate_tutorials_built.py (+9, -2)

@@ -50,8 +50,15 @@
     "intermediate_source/flask_rest_api_tutorial",
     "intermediate_source/text_to_speech_with_torchaudio",
     "intermediate_source/tensorboard_profiler_tutorial", # reenable after 2.0 release.
-    "intermediate_source/torchrec_intro_tutorial", # reenable after 3302 is fixe
-    "intermediate_source/memory_format_tutorial"
+    "advanced_source/semi_structured_sparse", # reenable after 3303 is fixed.
+    "intermediate_source/mario_rl_tutorial", # reenable after 3302 is fixed
+    "intermediate_source/reinforcement_ppo", # reenable after 3302 is fixed
+    "intermediate_source/pinmem_nonblock", # reenable after 3302 is fixed
+    "intermediate_source/dqn_with_rnn_tutorial", # reenable after 3302 is fixed
+    "advanced_source/pendulum", # reenable after 3302 is fixed
+    "advanced_source/coding_ddpg", # reenable after 3302 is fixed
+    "intermediate_source/torchrec_intro_tutorial", # reenable after 3302 is fixed
+    "recipes_source/recipes/reasoning_about_shapes" # reenable after 3326 is fixed
 ]
 
 def tutorial_source_dirs() -> List[Path]:

advanced_source/sharding.rst (-4)

@@ -22,10 +22,6 @@ We highly recommend CUDA when using torchRec. If using CUDA: - cuda >=
 !sudo chmod +x Miniconda3-py37_4.9.2-Linux-x86_64.sh
 !sudo bash ./Miniconda3-py37_4.9.2-Linux-x86_64.sh -b -f -p /usr/local
 
-.. code:: python
-
-# install pytorch with cudatoolkit 11.3
-!sudo conda install pytorch cudatoolkit=11.3 -c pytorch-nightly -y
 
 Installing torchRec will also install
 `FBGEMM <https://github.com/pytorch/fbgemm>`__, a collection of CUDA

advanced_source/torch_script_custom_ops.rst (+2)

@@ -190,6 +190,8 @@ Environment setup
 We need an installation of PyTorch and OpenCV. The easiest and most platform
 independent way to get both is to via Conda::
 
+.. these need to be updated
+
 conda install -c pytorch pytorch
 conda install opencv
 

beginner_source/colab.rst (+2, -2)

@@ -11,7 +11,7 @@ PyTorch Version in Google Colab
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Wen you are running a tutorial that requires a version of PyTorch that has
-just been released, that version might not be yet available in Google Colab.
+jst been released, that version might not be yet available in Google Colab.
 To check that you have the required ``torch`` and compatible domain libraries
 installed, run ``!pip list``.
 
@@ -27,7 +27,7 @@ Using Tutorial Data from Google Drive in Colab
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 We've added a new feature to tutorials that allows users to open the
-notebook associated with a tutorial in Google Colab. You may need to
+ntebook associated with a tutorial in Google Colab. You may need to
 copy data to your Google drive account to get the more complex tutorials
 to work.
 

beginner_source/introyt/captumyt.py (+24, -26)

@@ -106,14 +106,7 @@
 - Matplotlib version 3.3.4, since Captum currently uses a Matplotlib
 function whose arguments have been renamed in later versions
 
-To install Captum in an Anaconda or pip virtual environment, use the
-appropriate command for your environment below:
-
-With ``conda``:
-
-.. code-block:: sh
-
-conda install pytorch torchvision captum flask-compress matplotlib=3.3.4 -c pytorch
+To install Captum, use the appropriate command for your environment below:
 
 With ``pip``:
 
@@ -127,51 +120,56 @@
 
 A First Example
 ---------------
-
+
 To start, let’s take a simple, visual example. We’ll start with a ResNet
 model pretrained on the ImageNet dataset. We’ll get a test input, and
 use different **Feature Attribution** algorithms to examine how the
 input images affect the output, and see a helpful visualization of this
 input attribution map for some test images.
-
-First, some imports:
 
-"""
+First, some imports:
 
-import torch
-import torch.nn.functional as F
-import torchvision.transforms as transforms
-import torchvision.models as models
+"""
 
-import captum
-from captum.attr import IntegratedGradients, Occlusion, LayerGradCam, LayerAttribution
-from captum.attr import visualization as viz
+import json
 
 import os, sys
-import json
 
-import numpy as np
-from PIL import Image
+import captum
 import matplotlib.pyplot as plt
+
+import numpy as np
+import torch
+import torch.nn.functional as F
+import torchvision.models as models
+import torchvision.transforms as transforms
+from captum.attr import (
+    IntegratedGradients,
+    LayerAttribution,
+    LayerGradCam,
+    Occlusion,
+    visualization as viz,
+)
 from matplotlib.colors import LinearSegmentedColormap
+from PIL import Image
 
 
 #########################################################################
 # Now we’ll use the TorchVision model library to download a pretrained
 # ResNet. Since we’re not training, we’ll place it in evaluation mode for
 # now.
-#
+#
 
-model = models.resnet18(weights='IMAGENET1K_V1')
+model = models.resnet18(weights="IMAGENET1K_V1")
 model = model.eval()
 
 
 #######################################################################
 # The place where you got this interactive notebook should also have an
 # ``img`` folder with a file ``cat.jpg`` in it.
-#
+#
 
-test_img = Image.open('img/cat.jpg')
+test_img = Image.open("img/cat.jpg")
 test_img_data = np.asarray(test_img)
 plt.imshow(test_img_data)
 plt.show()
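For context (not part of this commit): the imports regrouped above feed Captum's attribution APIs later in the tutorial. A minimal sketch of one such call follows, assuming the torchvision ResNet weights used in the tutorial; the random input tensor and target class index are placeholders standing in for img/cat.jpg.

# Sketch only, not part of the diff: how the imports above are typically used.
import torch
import torchvision.models as models
from captum.attr import IntegratedGradients

model = models.resnet18(weights="IMAGENET1K_V1").eval()
input_img = torch.rand(1, 3, 224, 224)            # placeholder for the tutorial's img/cat.jpg
ig = IntegratedGradients(model)
attributions = ig.attribute(input_img, target=0)  # attribution w.r.t. class index 0 (placeholder)
print(attributions.shape)                         # matches the input shape: (1, 3, 224, 224)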

beginner_source/introyt/tensorboardyt_tutorial.py (-7)

@@ -24,13 +24,6 @@
 To run this tutorial, you’ll need to install PyTorch, TorchVision,
 Matplotlib, and TensorBoard.
 
-With ``conda``:
-
-.. code-block:: sh
-
-conda install pytorch torchvision -c pytorch
-conda install matplotlib tensorboard
-
 With ``pip``:
 
 .. code-block:: sh

conf.py (-6)

@@ -99,16 +99,10 @@
 
 def reset_seeds(gallery_conf, fname):
     torch.cuda.empty_cache()
-    torch.backends.cudnn.deterministic = True
-    torch.backends.cudnn.benchmark = False
-    torch._dynamo.reset()
-    torch._inductor.config.force_disable_caches = True
     torch.manual_seed(42)
     torch.set_default_device(None)
     random.seed(10)
     numpy.random.seed(10)
-    torch.set_grad_enabled(True)
-
     gc.collect()
 
 sphinx_gallery_conf = {

en-wordlist.txt (-11)

@@ -698,14 +698,3 @@ TorchServe
 Inductor’s
 onwards
 recompilations
-BiasCorrection
-ELU
-GELU
-NNCF
-OpenVINO
-OpenVINOQuantizer
-PReLU
-Quantizer
-SmoothQuant
-quantizer
-quantizers

intermediate_source/dist_tuto.rst (+1)

@@ -523,6 +523,7 @@ for an available MPI implementation. The following steps install the MPI
 backend, by installing PyTorch `from
 source <https://github.com/pytorch/pytorch#from-source>`__.
 
+.. needs an update
 1. Create and activate your Anaconda environment, install all the
 pre-requisites following `the
 guide <https://github.com/pytorch/pytorch#from-source>`__, but do

intermediate_source/memory_format_tutorial.py (-16)

@@ -376,22 +376,6 @@ def attribute(m):
     for (k, v) in attrs.items():
         setattr(m, k, v)
 
-import gc
-import sys
-
-torch._dynamo.reset()
-if torch.cuda.is_available():
-    torch.cuda.empty_cache()
-
-gc.collect()
-
-# Clear any references to the wrapper functions
-del old_attrs
-del contains_cl
-del print_inputs
-del check_wrapper
-del attribute
-
 ######################################################################
 # Work to do
 # ----------

intermediate_source/torch_compile_tutorial.py (+4, -13)

@@ -101,11 +101,8 @@ def forward(self, x):
 return torch.nn.functional.relu(self.lin(x))
 
 mod = MyModule()
-mod.compile()
-print(mod(t))
-## or:
-# opt_mod = torch.compile(mod)
-# print(opt_mod(t))
+opt_mod = torch.compile(mod)
+print(opt_mod(t))
 
 ######################################################################
 # torch.compile and Nested Calls
@@ -138,8 +135,8 @@ def forward(self, x):
 return torch.nn.functional.relu(self.outer_lin(x))
 
 outer_mod = OuterModule()
-outer_mod.compile()
-print(outer_mod(t))
+opt_outer_mod = torch.compile(outer_mod)
+print(opt_outer_mod(t))
 
 ######################################################################
 # We can also disable some functions from being compiled by using
@@ -200,12 +197,6 @@ def outer_function():
 # 4. **Compile Leaf Functions First:** In complex models with multiple nested
 # functions and modules, start by compiling the leaf functions or modules first.
 # For more information see `TorchDynamo APIs for fine-grained tracing <https://pytorch.org/docs/stable/torch.compiler_fine_grain_apis.html>`__.
-#
-# 5. **Prefer ``mod.compile()`` over ``torch.compile(mod)``:** Avoids ``_orig_`` prefix issues in ``state_dict``.
-#
-# 6. **Use ``fullgraph=True`` to catch graph breaks:** Helps ensure end-to-end compilation, maximizing speedup
-# and compatibility with ``torch.export``.
-
 
 ######################################################################
 # Demonstrating Speedups
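Side note on the pattern these hunks switch back to (not part of the commit): ``torch.compile(mod)`` returns a new optimized callable and leaves ``mod`` untouched, whereas ``mod.compile()``, the variant the removed tip recommended, compiles the module in place. A minimal sketch, assuming a recent PyTorch 2.x release (``nn.Module.compile()`` is only available there); the layer sizes and input shape are placeholders, not taken from the tutorial.

# Sketch only, not part of the diff: the two module-compilation forms referenced above.
import torch

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(100, 10)   # placeholder sizes

    def forward(self, x):
        return torch.nn.functional.relu(self.lin(x))

t = torch.randn(10, 100)                      # placeholder input
mod = MyModule()

# Form restored by this commit: returns a new optimized callable, mod is unchanged.
opt_mod = torch.compile(mod)
print(opt_mod(t).shape)

# In-place form from the removed tip #5: compiles mod itself, so state_dict keys
# keep their original names (no _orig_ prefix from the wrapper).
mod.compile()
print(mod(t).shape)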

prototype_source/inductor_windows.rst (+3, -2)

@@ -22,9 +22,10 @@ Install a Compiler
 
 C++ compiler is required for TorchInductor optimization, let's take Microsoft Visual C++ (MSVC) as an example.
 
-#. Download and install `MSVC <https://visualstudio.microsoft.com/downloads/>`_.
+1. Download and install `MSVC <https://visualstudio.microsoft.com/downloads/>`_.
 
-#. During Installation, select **Workloads** and then **Desktop & Mobile**. Select a checkmark on **Desktop Development with C++** and install.
+1. During Installation, select **Workloads** and then **Desktop & Mobile**.
+1. Select a checkmark on **Desktop Development with C++** and install.
 
 .. image:: ../_static/img/install_msvc.png
 
