
Commit 4191749

jkinsky, jimmytwei, krzeszew, alexsin368, and ZhaoqiongZ authored
Optimize PyTorch* Models using Intel® Extension for PyTorch* sample readme update (#1452)
* Fixes for 2023.1 AI Kit (#1409)
* Intel Python Numpy Numba_dpes kNN sample (#1292)
* *.py and *.ipynb files with implementation
* README.md and sample.json files with documentation
* License and third party programs
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample (#1293)
* add IntelPytorch Quantization code samples (#1301)
* add IntelPytorch Quantization code samples
* fix the spelling error in the README file
* use john's README with grammar fix and title change
* Rename third-party-grograms.txt to third-party-programs.txt

Co-authored-by: Jimmy Wei <[email protected]>

* AMX bfloat16 mixed precision learning TensorFlow Transformer sample (#1317)
* [New Sample] Intel Extension for TensorFlow Getting Started (#1313)
* first draft
* Update README.md
* remove redundant file
* [New Sample] [oneDNN] Benchdnn tutorial (#1315)
* New Sample: benchDNN tutorial
* Update readme: new sample
* Rename sample to benchdnn_tutorial
* Name fix
* Add files via upload (#1320)
* [New Sample] oneCCL Bindings for PyTorch Getting Started (#1316)
* Update README.md
* [New Sample] oneCCL Bindings for PyTorch Getting Started
* Update README.md
* add torch-ccl version check
* [New Sample] Intel Extension for PyTorch Getting Started (#1314)
* add new ipex GSG notebook for dGPU
* Update sample.json for expertise field
* Update requirements.txt
  Update package versions to comply with Snyk tool
* Updated title field in sample.json in TF Transformer AMX bfloat16 Mixed Precision sample to fit within character length range (#1327)
* add arch checker class (#1332)
* change gpu.patch to convert the code samples from cpu to gpu correctly (#1334)
* Fixes for spelling in AMX bfloat16 transformer sample and printing error in python code in numpy vs numba sample (#1335)
* 2023.1 ai kit itex get started example fix (#1338)
* Fix the typo
* Update ResNet50_Inference.ipynb
* fix resnet inference demo link (#1339)
* Fix printing issue in numpy vs numba AI sample (#1356)
* Fix Invalid Kmeans parameters on oneAPI 2023 (#1345)
* Update README to add new samples into the list (#1366)
* PyTorch AMX BF16 Training sample: remove graphs and performance numbers (#1408)
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample
* remove performance graphs, update README
* remove graphs from README and folder
* update top README in Features and Functionality

---------

Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>

* Optimize PyTorch* Models using Intel® Extension for PyTorch* readme update
  Restructured to match the new readme template. Restructured sections to increase clarity. Added information related to configuring conda as non-root user. Updated some formatting and branding.

---------

Co-authored-by: Jimmy Wei <[email protected]>
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
1 parent c457e68 commit 4191749

File tree

1 file changed (+39, -26 lines)
  • AI-and-Analytics/Features-and-Functionality/IntelPyTorch_Extensions_Inference_Optimization


AI-and-Analytics/Features-and-Functionality/IntelPyTorch_Extensions_Inference_Optimization/README.md

@@ -1,66 +1,72 @@
-# Tutorial: Optimize PyTorch Models using Intel® Extension for PyTorch* (IPEX)
-This notebook guides you through the process of extending your PyTorch code with Intel® Extension for PyTorch* (IPEX) with optimizations to achieve performance boosts on Intel® hardware.
+# `Optimize PyTorch* Models using Intel® Extension for PyTorch* (IPEX)` Sample
+
+This notebook guides you through the process of extending your PyTorch* code with Intel® Extension for PyTorch* (IPEX) with optimizations to achieve performance boosts on Intel® hardware.
 
 | Area | Description
 |:--- |:---
 | What you will learn | Applying IPEX Optimizations to a PyTorch workload in a step-by-step manner to gain performance boost
 | Time to complete | 30 minutes
+| Category | Code Optimization
 
 ## Purpose
 
-This sample notebook shows how to get started with Intel® Extension for PyTorch* (IPEX) for sample Computer Vision and NLP workloads.
+This sample notebook shows how to get started with Intel® Extension for PyTorch (IPEX) for sample Computer Vision and NLP workloads.
 
 The sample starts by loading two models from the PyTorch hub: **Faster-RCNN** (Faster R-CNN) and **distilbert** (DistilBERT). After loading the models, the sample applies sequential optimizations from IPEX and examines performance gains for each incremental change.
 
 You can make code changes quickly on top of existing PyTorch code to obtain the performance speedups for model inference.
 
 ## Prerequisites
 
-| Optimized for | Description
-|:--- |:---
-| OS | Ubuntu* 18.04 or newer
-| Hardware | Intel® Xeon® Scalable processor family
-| Software | Intel® AI Analytics Toolkit (AI Kit)
+| Optimized for | Description
+|:--- |:---
+| OS | Ubuntu* 18.04 or newer
+| Hardware | Intel® Xeon® Scalable processor family
+| Software | Intel® AI Analytics Toolkit (AI Kit)
 
 ### For Local Development Environments
 
 You will need to download and install the following toolkits, tools, and components to use the sample.
 
 - **Intel® AI Analytics Toolkit (AI Kit)**
 
-  You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux**](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts.
+  You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux**](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts. This sample assumes you have **Matplotlib** installed.
+
 
 - **Jupyter Notebook**
 
-  Install using PIP: `$pip install notebook`. <br> Alternatively, see [*Installing Jupyter*](https://jupyter.org/install) for detailed installation instructions.
+  Install using PIP: `pip install notebook`. <br> Alternatively, see [*Installing Jupyter*](https://jupyter.org/install) for detailed installation instructions.
 
 - **Transformers - Hugging Face**
 
-  Install using PIP: `$pip install transformers`
+  Install using PIP: `pip install transformers`
 
 ### For Intel® DevCloud
 
-Most of necessary tools and components are already installed in the environment. You do not need to install additional components. See [Intel® DevCloud for oneAPI](https://devcloud.intel.com/oneapi/get_started/) for information.
-You would need to install the Hugging Face Transformers library using pip as shown above.
+Most of necessary tools and components are already installed in the environment. You do not need to install additional components. See [Intel® DevCloud for oneAPI](https://devcloud.intel.com/oneapi/get_started/) for information. You would need to install the Hugging Face Transformers library using pip as shown above.
 
 ## Key Implementation Details
 
 This sample tutorial contains one Jupyter Notebook and one Python script.
 
 ### Jupyter Notebook
 
-|Notebook |Description
-|:--- |:---
+| Notebook | Description
+|:--- |:---
 |`optimize_pytorch_models_with_ipex.ipynb` |Gain performance boost during inference using IPEX.
 
 ### Python Script
 
-|Script |Description
-|:--- |:---
-|`resnet50.py` |The script optimizes a Faster R-CNN model to be used with IPEX Launch Script.
+| Script | Description
+|:--- |:---
+|`resnet50.py` |The script optimizes a Faster R-CNN model to be used with IPEX Launch Script.
 
 
-## Run the Sample on Linux*
+## Set Environment Variables
+
+When working with the command-line interface (CLI), you should configure the oneAPI toolkits using environment variables. Set up your CLI environment by sourcing the `setvars` script every time you open a new terminal window. This practice ensures that your compiler, libraries, and tools are ready for development.
+
+## Run the `Optimize PyTorch* Models using Intel® Extension for PyTorch* (IPEX)` Sample
 
 > **Note**: If you have not already done so, set up your CLI
 > environment by sourcing the `setvars` script in the root of your oneAPI installation.
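[Editor's note: the incremental-optimization flow the Purpose section above describes can be sketched roughly as follows. This is a minimal illustrative sketch, not the sample's actual code: it requires the `intel_extension_for_pytorch` package, and `resnet50` stands in for the Faster R-CNN and DistilBERT models the sample really uses.]

```python
import torch
import intel_extension_for_pytorch as ipex  # extends stock PyTorch with Intel optimizations

# Load an inference model from the PyTorch hub (weights omitted for brevity).
model = torch.hub.load("pytorch/vision", "resnet50", weights=None)
model.eval()

# Apply IPEX operator fusion and memory-layout optimizations.
# Passing dtype=torch.bfloat16 instead would enable mixed precision
# on hardware that supports it.
model = ipex.optimize(model, dtype=torch.float32)

# Inference then proceeds exactly as with stock PyTorch.
with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))
```

The key point is that `ipex.optimize` is a drop-in call on an existing `eval()`-mode model; no other changes to the inference code are required.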
@@ -84,7 +90,16 @@ This sample tutorial contains one Jupyter Notebook and one Python script.
 
 By default, the AI Kit is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it.
 
-You can choose to activate Conda environment without root access. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment using the following commands similar to the following.
+#### Activate Conda without Root Access (Optional)
+
+You can choose to activate Conda environment without root access.
+
+1. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment using commands similar to the following.
+
+   ```
+   conda create --name user_pytorch --clone pytorch
+   conda activate user_pytorch
+   ```
 
 #### Run the NoteBook
 
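[Editor's note: taken together, the environment steps in the hunk above amount to a terminal session like the following. This is a hedged sketch: the path assumes the default `/opt/intel/oneapi` installation, and the cloned environment name `user_pytorch` is the one suggested by the README.]

```shell
# Once per terminal: configure oneAPI compilers, libraries, and tools.
source /opt/intel/oneapi/setvars.sh

# Optional: clone the AI Kit's pytorch environment to avoid root access.
conda create --name user_pytorch --clone pytorch
conda activate user_pytorch

# Launch Jupyter from the sample directory and open the notebook.
jupyter notebook optimize_pytorch_models_with_ipex.ipynb
```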
@@ -97,14 +112,14 @@ This sample tutorial contains one Jupyter Notebook and one Python script.
 ```
 optimize_pytorch_models_with_ipex.ipynb
 ```
-4. Change the kernel to **pytorch**.
+4. Change the kernel to **PyTorch (AI Kit)**.
 5. Run every cell in the Notebook in sequence.
 
 #### Troubleshooting
 
 If you receive an error message, troubleshoot the problem using the **Diagnostics Utility for Intel® oneAPI Toolkits**. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the [Diagnostics Utility for Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html) for more information on using the utility.
 
-### Run the Sample on Intel® DevCloud
+### Run the Sample on Intel® DevCloud (Optional)
 
 1. If you do not already have an account, request an Intel® DevCloud account at [*Create an Intel® DevCloud Account*](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
 2. On a Linux* system, open a terminal.
@@ -123,15 +138,13 @@ If you receive an error message, troubleshoot the problem using the **Diagnostic
 
 ## Example Output
 
-Users should be able to see some diagrams for performance comparison and analysis.
-An example of performance comparison for inference speedup obtained by enabling IPEX optimizations.
+Users should be able to see some diagrams for performance comparison and analysis. An example of performance comparison for inference speedup obtained by enabling IPEX optimizations.
 
 ![Performance Numbers](images/performance_numbers.png)
 
-
 ## License
 
 Code samples are licensed under the MIT license. See
 [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
 
-Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
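[Editor's note: the README's Python Script table says `resnet50.py` is meant "to be used with IPEX Launch Script". That launcher is the `intel_extension_for_pytorch.cpu.launch` module; a hypothetical invocation is sketched below. The flag shown is illustrative; check `--help` for the options in your IPEX version.]

```shell
# Run the script under the IPEX CPU launcher, which pins threads and
# configures memory-allocator settings for better inference throughput.
python -m intel_extension_for_pytorch.cpu.launch --ninstances 1 resnet50.py
```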
