Commit db6e65d

Merge pull request #1102 from pytorch/fix_notebooks_directory_readme

Modified the notebooks directory's README file

2 parents: ce9f641 + 6b5bee1

File tree: 1 file changed, +12 -11 lines


notebooks/README.md

Lines changed: 12 additions & 11 deletions
````diff
@@ -8,7 +8,7 @@ The most convenient way to run these notebooks is via a docker container, which
 First, clone the repository:
 
 ```
-git clone https://github.com/NVIDIA/Torch-TensorRT
+git clone https://github.com/pytorch/TensorRT
 ```
 
 Next, navigate to the repo's root directory:
````
````diff
@@ -23,10 +23,10 @@ At this point, we recommend pulling the [PyTorch container](https://catalog.ngc.
 from [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) as follows:
 
 ```
-docker pull nvcr.io/nvidia/pytorch:21.12-py3
+docker pull nvcr.io/nvidia/pytorch:22.05-py3
 ```
 
-Replace ```21.12``` with a different string in the form ```yy.mm```,
+Replace ```22.05``` with a different string in the form ```yy.mm```,
 where ```yy``` indicates the last two numbers of a calendar year, and
 ```mm``` indicates the month in two-digit numerical form, if you wish
 to pull a different version of the container.
````
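The `yy.mm` tag convention described in that hunk can be sketched in shell. This is an illustrative snippet, not part of the commit; it only generates a tag string in the documented format, and whether a given month's tag is actually published must still be checked against the NGC catalog:

```shell
# Build an NGC PyTorch container tag in the yy.mm form the README
# describes (e.g. 22.05): two-digit year, dot, two-digit month.
# Note: not every generated tag exists; verify on
# https://catalog.ngc.nvidia.com/ before pulling.
tag="$(date +%y).$(date +%m)-py3"
echo "nvcr.io/nvidia/pytorch:${tag}"
```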
````diff
@@ -36,14 +36,18 @@ Therefore, you can run the container and the notebooks therein without
 mounting the repo to the container. To do so, run
 
 ```
-docker run --gpus=all --rm -it --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:21.12-py3 bash
+docker run --gpus=all --rm -it --net=host --ipc=host \
+--ulimit memlock=-1 --ulimit stack=67108864 \
+nvcr.io/nvidia/pytorch:22.05-py3 bash
 ```
 
 If, however, you wish for your work in the notebooks to persist, use the
 ```-v``` flag to mount the repo to the container as follows:
 
 ```
-docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:21.12-py3 bash
+docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT \
+--net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
+nvcr.io/nvidia/pytorch:22.05-py3 bash
 ```
 
 ### b. Building a Torch-TensorRT container from source
````
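As a side note on the long `docker run` invocation in that hunk, the command can be assembled from named pieces, which makes the role of each flag group easier to see. This is a sketch, not part of the commit; the image tag and mount target mirror the README's example, and the comments reflect the flags' standard Docker meanings:

```shell
# Assemble the README's `docker run` command from named parts.
# --gpus=all         expose all host GPUs to the container
# -v $PWD:/...       bind-mount the repo so notebook edits persist
# --net/--ipc=host   share host network and IPC namespaces
# --ulimit ...       raise memlock/stack limits (common for CUDA workloads)
image="nvcr.io/nvidia/pytorch:22.05-py3"
mount="-v $PWD:/Torch-TensorRT"
net="--net=host --ipc=host"
limits="--ulimit memlock=-1 --ulimit stack=67108864"
cmd="docker run --gpus=all --rm -it $mount $net $limits $image bash"
echo "$cmd"
```

Splitting the flags out this way also makes it easy to swap the image tag or drop the bind mount without retyping the whole line.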
````diff
@@ -57,7 +61,9 @@ docker build -t torch_tensorrt -f ./docker/Dockerfile .
 To run this container, enter the following command:
 
 ```
-docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 torch_tensorrt:latest bash
+docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT \
+--net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
+torch_tensorrt:latest bash
 ```
 
 ### c. Running the notebooks inside the container
````
````diff
@@ -100,8 +106,3 @@ Within the container, the notebooks themselves are located at `/Torch-TensorRT/n
 - [vgg-qat.ipynb](vgg-qat.ipynb): Quantization Aware Trained models in INT8 using Torch-TensorRT
 - [EfficientNet-example.ipynb](EfficientNet-example.ipynb): Simple use of 3rd party PyTorch model library
 - [CitriNet-example.ipynb](CitriNet-example.ipynb): Optimizing the Nemo Citrinet acoustic model
-
-
-```python
-
-```
````
