Commit 5bca91c

[New Sample] Intel Extension for PyTorch Getting Started (#1314)

* add new ipex GSG notebook for dGPU
* Update sample.json for expertise field

3 files changed: +535 -98 lines changed
# `Intel® Extension for PyTorch* Getting Started` Sample

Intel® Extension for PyTorch* extends PyTorch* with optimizations for an extra performance boost on Intel hardware. Most of the optimizations will eventually be included in stock PyTorch* releases; the intention of the extension is to deliver up-to-date features and optimizations for PyTorch* on Intel hardware. Examples include AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX).
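As a quick, optional check (not part of the sample), you can confirm that the oneDNN backend that carries these CPU optimizations is available in your PyTorch* build:

```
import torch

# oneDNN (formerly MKL-DNN) backs many optimized CPU operators in PyTorch*.
print(torch.backends.mkldnn.is_available())
```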

This sample contains a Jupyter* Notebook that guides you through the process of running a PyTorch* inference workload on both GPU and CPU using Intel® AI Analytics Toolkit (AI Kit), and shows how to analyze GPU and CPU usage via Intel® oneAPI Deep Neural Network Library (oneDNN) verbose logs.

| Area | Description
|:--- |:---
| What you will learn | How to get started with Intel® Extension for PyTorch*
| Time to complete | 15 minutes

## Prerequisites

| Optimized for | Description
|:--- |:---
| OS | Ubuntu* 22.04
| Hardware | Intel® Xeon® Scalable processor family <br> Intel® Data Center GPUs
| Software | Intel® AI Analytics Toolkit (AI Kit)

## Purpose

This sample code demonstrates how to begin using the Intel® Extension for PyTorch*.

The sample implements an example neural network with one convolution layer, one normalization layer, and one ReLU layer.

You can quickly build and train a PyTorch* neural network using simple Python code. Also, by controlling the built-in environment variable `DNNL_VERBOSE`, the sample shows how Intel® oneDNN primitives are called explicitly and what their performance is during PyTorch* model training and inference with Intel® Extension for PyTorch*.
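A minimal sketch of a network with that layer structure, with `ipex.optimize` applied for inference, might look like the following; the class name `SimpleNet` and the tensor shapes are illustrative, not the exact code in `Intel_Extension_For_PyTorch_Hello_World.py`:

```
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

class SimpleNet(nn.Module):
    """One convolution layer, one normalization layer, and one ReLU layer."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(16)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.norm(self.conv(x)))

model = SimpleNet().eval()
model = ipex.optimize(model)  # apply Intel® Extension for PyTorch* optimizations

with torch.no_grad():
    output = model(torch.rand(1, 3, 224, 224))
print(output.shape)
```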

The Jupyter Notebook in this sample also shows how to change PyTorch* code to run on the Intel® Data Center GPU family and how to validate GPU or CPU usage for PyTorch* workloads on Intel CPUs or GPUs.
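The pattern the Notebook follows for GPU execution looks roughly like the sketch below, which moves the model and input to the `xpu` device. It assumes an Intel GPU and a GPU-enabled build of the extension, and illustrates the approach rather than the Notebook's exact code:

```
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device

# Reuse the SimpleNet sketch from above; any nn.Module works the same way.
model = SimpleNet().eval().to("xpu")
data = torch.rand(1, 3, 224, 224).to("xpu")

model = ipex.optimize(model)
with torch.no_grad():
    output = model(data)
print(output.device)  # expected: xpu:0 when the GPU is used
```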

>**Note**: Intel® Extension for PyTorch* is available as part of the Intel® AI Analytics Toolkit. For more information on the optimizations as well as performance data, see [Intel and Facebook Collaborate to Boost PyTorch CPU Performance](http://software.intel.com/en-us/articles/intel-and-facebook-collaborate-to-boost-pytorch-cpu-performance).
>
>Find more examples in the [Examples](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html) topic of the [Intel® Extension for PyTorch* Documentation](https://intel.github.io/intel-extension-for-pytorch).

## Key Implementation Details

The sample uses a pretrained model provided by Intel and published as part of the [Intel Model Zoo](https://github.com/IntelAI/models). The example also illustrates how to use PyTorch* and Intel® Math Kernel Library (Intel® MKL) runtime settings to maximize CPU performance on a ResNet50 workload.

- The Jupyter Notebook, `ResNet50_Inference.ipynb`, is implemented for both CPU and GPU using Intel® Extension for PyTorch*.
- The `Intel_Extension_For_PyTorch_Hello_World.py` script is implemented for CPU using the Python language.
- You must export the environment variable `DNNL_VERBOSE=1` to display the deep learning primitives trace during execution.

> **Note**: The test dataset is inherited from `torch.utils.data.Dataset`, and the model is inherited from `torch.nn.Module`.
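As an illustration of that structure (not the sample's exact code), a dataset subclass might look like the following minimal sketch; the name `TestDataset` and the synthetic shapes are hypothetical:

```
import torch
from torch.utils.data import Dataset

class TestDataset(Dataset):
    """Synthetic dataset; the class name and tensor shapes are illustrative."""
    def __init__(self, num_samples=100):
        self.data = torch.rand(num_samples, 3, 224, 224)
        self.labels = torch.randint(0, 10, (num_samples,))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.labels[idx]
```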

## Run the `Intel® Extension for PyTorch* Getting Started` Sample

### On Linux*

> **Note**: If you have not already done so, set up your CLI
> environment by sourcing the `setvars` script in the root of your oneAPI installation.
>
> Linux*:
> - For system-wide installations: `. /opt/intel/oneapi/setvars.sh`
> - For private installations: `. ~/intel/oneapi/setvars.sh`
> - For non-POSIX shells, like csh, use the following command: `bash -c 'source <install-dir>/setvars.sh ; exec csh'`
>
> For more information on configuring environment variables, see [Use the setvars Script with Linux* or macOS*](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html).

#### Activate Conda

1. Activate the conda environment:
   ```
   conda activate pytorch
   ```

2. Activate the conda environment without root access (Optional).

   By default, the AI Kit is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it.

   To bypass root access when managing your conda environment, clone and activate your desired conda environment using commands similar to the following:
   ```
   conda create --name user_pytorch --clone pytorch
   ```
   Then activate your conda environment with the following command:
   ```
   conda activate user_pytorch
   ```
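With the environment active, you can optionally confirm that the extension is importable. This is a minimal check, assuming the `pytorch` (or cloned) environment ships `intel_extension_for_pytorch`:

```
import torch
import intel_extension_for_pytorch as ipex

# Versions reported by the packages in the active conda environment.
print("PyTorch*:", torch.__version__)
print("Intel® Extension for PyTorch*:", ipex.__version__)
```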

#### Run the Script

1. Navigate to the directory with the sample.
   ```
   cd ~/oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/Intel_Extension_For_PyTorch_GettingStarted
   ```
2. Run the Python script.
   ```
   python Intel_Extension_For_PyTorch_Hello_World.py
   ```
   You will see the oneDNN verbose trace once you have exported `DNNL_VERBOSE`:
   ```
   export DNNL_VERBOSE=1
   ```
   >**Note**: Read more about the oneDNN verbose log at [https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html).
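Alternatively, you can set the variable from inside a Python script before any PyTorch* operators run, which has the same effect as exporting it in the shell; this is a workflow suggestion, not a step in the sample:

```
import os

# Must be set before the first oneDNN primitive executes.
os.environ["DNNL_VERBOSE"] = "1"

import torch  # imported after the variable is set

conv = torch.nn.Conv2d(3, 16, kernel_size=3)
y = conv(torch.rand(1, 3, 224, 224))  # emits onednn_verbose lines to stdout
```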

#### Run the Jupyter Notebook

1. Change to the sample directory.
2. Launch Jupyter Notebook.
   ```
   jupyter notebook --ip=0.0.0.0 --port 8888 --allow-root
   ```
3. Follow the instructions to open the URL with the token in your browser.
4. Locate and select the Notebook.
   ```
   ResNet50_Inference.ipynb
   ```
5. Change your Jupyter Notebook kernel to **PyTorch**.
6. Run every cell in the Notebook in sequence.

### Troubleshooting

If you receive an error message, troubleshoot the problem using the **Diagnostics Utility for Intel® oneAPI Toolkits**. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the [Diagnostics Utility for Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html) for more information on using the utility.

### Run the `Intel® Extension for PyTorch* Getting Started` Sample on Intel® DevCloud (Optional)

1. If you do not already have an account, request an Intel® DevCloud account at [Create an Intel® DevCloud Account](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
2. On a Linux* system, open a terminal.
3. SSH into Intel® DevCloud.
   ```
   ssh DevCloud
   ```
   > **Note**: You can find information about configuring your Linux system and connecting to Intel® DevCloud on the Intel® DevCloud for oneAPI [Get Started](https://DevCloud.intel.com/oneapi/get_started) page.
4. Navigate to the directory with the sample.
   ```
   cd ~/oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/Intel_Extension_For_PyTorch_GettingStarted
   ```
5. Submit the `Intel_Extension_For_PyTorch_GettingStarted` workload on the selected node with the run script.
   ```
   ./q ./run.sh
   ```
   The `run.sh` script contains all the instructions needed to run the `Intel_Extension_For_PyTorch_Hello_World.py` workload.

### Example Output

On successful execution, the script prints `[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]` in the terminal.

## License

Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third-party program licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
