Optimize PyTorch* Models using Intel® Extension for PyTorch* sample readme update (#1452)
* Fixes for 2023.1 AI Kit (#1409)
* Intel Python Numpy Numba_dpex kNN sample (#1292)
* *.py and *.ipynb files with implementation
* README.md and sample.json files with documentation
* License and third party programs
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample (#1293)
* add IntelPytorch Quantization code samples (#1301)
* add IntelPytorch Quantization code samples
* fix the spelling error in the README file
* use john's README with grammar fix and title change
* Rename third-party-grograms.txt to third-party-programs.txt
Co-authored-by: Jimmy Wei <[email protected]>
* AMX bfloat16 mixed precision learning TensorFlow Transformer sample (#1317)
* [New Sample] Intel Extension for TensorFlow Getting Started (#1313)
* first draft
* Update README.md
* remove redundant file
* [New Sample] [oneDNN] Benchdnn tutorial (#1315)
* New Sample: benchDNN tutorial
* Update readme: new sample
* Rename sample to benchdnn_tutorial
* Name fix
* Add files via upload (#1320)
* [New Sample] oneCCL Bindings for PyTorch Getting Started (#1316)
* Update README.md
* [New Sample] oneCCL Bindings for PyTorch Getting Started
* Update README.md
* add torch-ccl version check
* [New Sample] Intel Extension for PyTorch Getting Started (#1314)
* add new ipex GSG notebook for dGPU
* Update sample.json
for expertise field
* Update requirements.txt
Update package versions to comply with Snyk tool
* Updated title field in sample.json in TF Transformer AMX bfloat16 Mixed Precision sample to fit within character length range (#1327)
* add arch checker class (#1332)
* change gpu.patch to convert the code samples from cpu to gpu correctly (#1334)
* Fixes for spelling in AMX bfloat16 transformer sample and printing error in python code in numpy vs numba sample (#1335)
* 2023.1 ai kit itex get started example fix (#1338)
* Fix the typo
* Update ResNet50_Inference.ipynb
* fix resnet inference demo link (#1339)
* Fix printing issue in numpy vs numba AI sample (#1356)
* Fix Invalid Kmeans parameters on oneAPI 2023 (#1345)
* Update README to add new samples into the list (#1366)
* PyTorch AMX BF16 Training sample: remove graphs and performance numbers (#1408)
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample
* remove performance graphs, update README
* remove graphs from README and folder
* update top README in Features and Functionality
---------
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
* Optimize PyTorch* Models using Intel® Extension for PyTorch* readme update
Restructured to match the new readme template and to increase section clarity. Added information about configuring conda as a non-root user. Updated some formatting and branding.
---------
Co-authored-by: Jimmy Wei <[email protected]>
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
Changed file: AI-and-Analytics/Features-and-Functionality/IntelPyTorch_Extensions_Inference_Optimization/README.md (+39 −26)
````diff
@@ -1,66 +1,72 @@
-# Tutorial: Optimize PyTorch Models using Intel® Extension for PyTorch* (IPEX)
-This notebook guides you through the process of extending your PyTorch code with Intel® Extension for PyTorch* (IPEX) with optimizations to achieve performance boosts on Intel® hardware.
+# `Optimize PyTorch* Models using Intel® Extension for PyTorch* (IPEX)` Sample
+
+This notebook guides you through the process of extending your PyTorch* code with Intel® Extension for PyTorch* (IPEX) with optimizations to achieve performance boosts on Intel® hardware.
 
 | Area                 | Description
 |:---                  |:---
 | What you will learn  | Applying IPEX Optimizations to a PyTorch workload in a step-by-step manner to gain performance boost
 | Time to complete     | 30 minutes
+| Category             | Code Optimization
 
 ## Purpose
 
-This sample notebook shows how to get started with Intel® Extension for PyTorch* (IPEX) for sample Computer Vision and NLP workloads.
+This sample notebook shows how to get started with Intel® Extension for PyTorch (IPEX) for sample Computer Vision and NLP workloads.
 
 The sample starts by loading two models from the PyTorch hub: **Faster-RCNN** (Faster R-CNN) and **distilbert** (DistilBERT). After loading the models, the sample applies sequential optimizations from IPEX and examines performance gains for each incremental change.
 
 You can make code changes quickly on top of existing PyTorch code to obtain the performance speedups for model inference.
 
 ## Prerequisites
 
-| Optimized for        | Description
-|:---                  |:---
-| OS                   | Ubuntu* 18.04 or newer
-| Hardware             | Intel® Xeon® Scalable processor family
-| Software             | Intel® AI Analytics Toolkit (AI Kit)
+| Optimized for        | Description
+|:---                  |:---
+| OS                   | Ubuntu* 18.04 or newer
+| Hardware             | Intel® Xeon® Scalable processor family
+| Software             | Intel® AI Analytics Toolkit (AI Kit)
 
 ### For Local Development Environments
 
 You will need to download and install the following toolkits, tools, and components to use the sample.
 
 **Intel® AI Analytics Toolkit (AI Kit)**
 
-You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux**](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts.
+You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux**](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts. This sample assumes you have **Matplotlib** installed.
+
 
 **Jupyter Notebook**
 
-Install using PIP: `$pip install notebook`. <br> Alternatively, see [*Installing Jupyter*](https://jupyter.org/install) for detailed installation instructions.
+Install using PIP: `pip install notebook`. <br> Alternatively, see [*Installing Jupyter*](https://jupyter.org/install) for detailed installation instructions.
 
 **Transformers - Hugging Face**
 
-Install using PIP: `$pip install transformers`
+Install using PIP: `pip install transformers`
 
 ### For Intel® DevCloud
 
-Most of necessary tools and components are already installed in the environment. You do not need to install additional components. See [Intel® DevCloud for oneAPI](https://devcloud.intel.com/oneapi/get_started/) for information.
-You would need to install the Hugging Face Transformers library using pip as shown above.
+Most of necessary tools and components are already installed in the environment. You do not need to install additional components. See [Intel® DevCloud for oneAPI](https://devcloud.intel.com/oneapi/get_started/) for information. You would need to install the Hugging Face Transformers library using pip as shown above.
 
 ## Key Implementation Details
 
 This sample tutorial contains one Jupyter Notebook and one Python script.
 
 ### Jupyter Notebook
 
-|Notebook                                  |Description
-|:---                                      |:---
+|Notebook                                  | Description
+|:---                                      |:---
 |`optimize_pytorch_models_with_ipex.ipynb` |Gain performance boost during inference using IPEX.
 
 ### Python Script
 
-|Script        |Description
-|:---          |:---
-|`resnet50.py` |The script optimizes a Faster R-CNN model to be used with IPEX Launch Script.
+|Script        | Description
+|:---          |:---
+|`resnet50.py` |The script optimizes a Faster R-CNN model to be used with IPEX Launch Script.
 
 
-## Run the Sample on Linux*
+## Set Environment Variables
+
+When working with the command-line interface (CLI), you should configure the oneAPI toolkits using environment variables. Set up your CLI environment by sourcing the `setvars` script every time you open a new terminal window. This practice ensures that your compiler, libraries, and tools are ready for development.
+
+## Run the `Optimize PyTorch* Models using Intel® Extension for PyTorch* (IPEX)` Sample
 
 > **Note**: If you have not already done so, set up your CLI
 > environment by sourcing the `setvars` script in the root of your oneAPI installation.
````
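The flow the updated README describes — run inference, apply IPEX optimizations, and compare latency for each incremental change — can be sketched with a stdlib-only timing harness. This is a hedged illustration, not code from the sample: the `ipex.optimize` call appears only as a comment, and the `baseline`/`optimized` callables are stand-ins for real model inference.

```python
import time

def average_latency(fn, warmup=3, iters=10):
    """Average wall-clock latency of fn() after a few warmup calls."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# In the actual notebook the callables would wrap model inference,
# e.g. (assumption, not verbatim sample code):
#   import intel_extension_for_pytorch as ipex
#   model = ipex.optimize(model)               # apply IPEX optimizations
#   optimized = lambda: model(sample_input)
baseline = lambda: time.sleep(0.002)    # stand-in for unoptimized inference
optimized = lambda: time.sleep(0.001)   # stand-in for IPEX-optimized inference

speedup = average_latency(baseline) / average_latency(optimized)
print(f"speedup: {speedup:.2f}x")
```

The same harness can time each of the sample's sequential optimization steps by swapping in a new callable per step.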
````diff
@@ -84,7 +90,16 @@ This sample tutorial contains one Jupyter Notebook and one Python script.
 
 By default, the AI Kit is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it.
 
-You can choose to activate Conda environment without root access. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment using the following commands similar to the following.
+#### Activate Conda without Root Access (Optional)
+
+You can choose to activate Conda environment without root access.
+
+1. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment using commands similar to the following.
+
+   ```
+   conda create --name user_pytorch --clone pytorch
+   conda activate user_pytorch
+   ```
 
 #### Run the NoteBook
 
````
````diff
@@ -97,14 +112,14 @@ This sample tutorial contains one Jupyter Notebook and one Python script.
    ```
    optimize_pytorch_models_with_ipex.ipynb
    ```
-4. Change the kernel to **pytorch**.
+4. Change the kernel to **PyTorch (AI Kit)**.
 5. Run every cell in the Notebook in sequence.
 
 #### Troubleshooting
 
 If you receive an error message, troubleshoot the problem using the **Diagnostics Utility for Intel® oneAPI Toolkits**. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the [Diagnostics Utility for Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html) for more information on using the utility.
 
-### Run the Sample on Intel® DevCloud
+### Run the Sample on Intel® DevCloud (Optional)
 
 1. If you do not already have an account, request an Intel® DevCloud account at [*Create an Intel® DevCloud Account*](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
 2. On a Linux* system, open a terminal.
````
````diff
@@ -123,15 +138,13 @@ If you receive an error message, troubleshoot the problem using the **Diagnostic
 
 ## Example Output
 
-Users should be able to see some diagrams for performance comparison and analysis.
-An example of performance comparison for inference speedup obtained by enabling IPEX optimizations.
+Users should be able to see some diagrams for performance comparison and analysis. An example of performance comparison for inference speedup obtained by enabling IPEX optimizations.
 Code samples are licensed under the MIT license. See
 [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
 
-Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
````