@@ -8,7 +8,7 @@ The most convenient way to run these notebooks is via a docker container, which
 First, clone the repository:
 
 ```
-git clone https://github.com/NVIDIA/Torch-TensorRT
+git clone https://github.com/pytorch/TensorRT
 ```
 
 Next, navigate to the repo's root directory:
@@ -23,10 +23,10 @@ At this point, we recommend pulling the [PyTorch container](https://catalog.ngc.
 from [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) as follows:
 
 ```
-docker pull nvcr.io/nvidia/pytorch:21.12-py3
+docker pull nvcr.io/nvidia/pytorch:22.05-py3
 ```
 
-Replace ```21.12``` with a different string in the form ```yy.mm```,
+Replace ```22.05``` with a different string in the form ```yy.mm```,
 where ```yy``` indicates the last two numbers of a calendar year, and
 ```mm``` indicates the month in two-digit numerical form, if you wish
 to pull a different version of the container.
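As a side illustration of the ```yy.mm``` tag convention described in this hunk (not part of the diff itself; the helper name `ngc_pytorch_tag` is hypothetical), the tag can be derived from a release date:

```python
from datetime import date

def ngc_pytorch_tag(release: date) -> str:
    """Build an NGC PyTorch container tag in the yy.mm-py3 form."""
    # yy = last two digits of the year, mm = zero-padded month
    return f"{release.year % 100:02d}.{release.month:02d}-py3"

# The 22.05 tag introduced by this commit corresponds to May 2022:
print(ngc_pytorch_tag(date(2022, 5, 1)))  # 22.05-py3
```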
@@ -36,14 +36,18 @@ Therefore, you can run the container and the notebooks therein without
 mounting the repo to the container. To do so, run
 
 ```
-docker run --gpus=all --rm -it --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:21.12-py3 bash
+docker run --gpus=all --rm -it --net=host --ipc=host \
+    --ulimit memlock=-1 --ulimit stack=67108864 \
+    nvcr.io/nvidia/pytorch:22.05-py3 bash
 ```
 
 If, however, you wish for your work in the notebooks to persist, use the
 ```-v``` flag to mount the repo to the container as follows:
 
 ```
-docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:21.12-py3 bash
+docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT \
+    --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
+    nvcr.io/nvidia/pytorch:22.05-py3 bash
 ```
 
 ### b. Building a Torch-TensorRT container from source
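An aside on the ```docker run``` flags above (illustration only, not part of the diff): ```--ulimit stack=67108864``` sets the per-process stack limit in bytes, and the value is exactly 64 MiB:

```python
# --ulimit stack takes a byte count; 67108864 bytes == 64 MiB
stack_limit_bytes = 67108864
print(stack_limit_bytes == 64 * 1024 * 1024)  # True
```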
@@ -57,7 +61,9 @@ docker build -t torch_tensorrt -f ./docker/Dockerfile .
 To run this container, enter the following command:
 
 ```
-docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 torch_tensorrt:latest bash
+docker run --gpus=all --rm -it -v $PWD:/Torch-TensorRT \
+    --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
+    torch_tensorrt:latest bash
 ```
 
 ### c. Running the notebooks inside the container
@@ -100,8 +106,3 @@ Within the container, the notebooks themselves are located at `/Torch-TensorRT/n
 - [vgg-qat.ipynb](vgg-qat.ipynb): Quantization Aware Trained models in INT8 using Torch-TensorRT
 - [EfficientNet-example.ipynb](EfficientNet-example.ipynb): Simple use of 3rd party PyTorch model library
 - [CitriNet-example.ipynb](CitriNet-example.ipynb): Optimizing the Nemo Citrinet acoustic model
-
-
-```python
-
-```