AI and Analytics Features and Functionality: Intel TensorFlow enabling auto-mixed precision for transfer learning (#1461)
* Fixes for 2023.1 AI Kit (#1409)
* Intel Python Numpy Numba_dpes kNN sample (#1292)
* *.py and *.ipynb files with implementation
* README.md and sample.json files with documentation
* License and third-party programs
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample (#1293)
* add IntelPytorch Quantization code samples (#1301)
* add IntelPytorch Quantization code samples
* fix the spelling error in the README file
* use john's README with grammar fix and title change
* Rename third-party-grograms.txt to third-party-programs.txt
Co-authored-by: Jimmy Wei <[email protected]>
* AMX bfloat16 mixed precision learning TensorFlow Transformer sample (#1317)
* [New Sample] Intel Extension for TensorFlow Getting Started (#1313)
* first draft
* Update README.md
* remove redundant file
* [New Sample] [oneDNN] Benchdnn tutorial (#1315)
* New Sample: benchDNN tutorial
* Update readme: new sample
* Rename sample to benchdnn_tutorial
* Name fix
* Add files via upload (#1320)
* [New Sample] oneCCL Bindings for PyTorch Getting Started (#1316)
* Update README.md
* [New Sample] oneCCL Bindings for PyTorch Getting Started
* Update README.md
* add torch-ccl version check
* [New Sample] Intel Extension for PyTorch Getting Started (#1314)
* add new ipex GSG notebook for dGPU
* Update sample.json
for expertise field
* Update requirements.txt
Update package versions to comply with Snyk tool
* Updated title field in sample.json in TF Transformer AMX bfloat16 Mixed Precision sample to fit within character length range (#1327)
* add arch checker class (#1332)
* change gpu.patch to convert the code samples from cpu to gpu correctly (#1334)
* Fixes for spelling in AMX bfloat16 transformer sample and printing error in python code in numpy vs numba sample (#1335)
* 2023.1 ai kit itex get started example fix (#1338)
* Fix the typo
* Update ResNet50_Inference.ipynb
* fix resnet inference demo link (#1339)
* Fix printing issue in numpy vs numba AI sample (#1356)
* Fix Invalid Kmeans parameters on oneAPI 2023 (#1345)
* Update README to add new samples into the list (#1366)
* PyTorch AMX BF16 Training sample: remove graphs and performance numbers (#1408)
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample
* remove performance graphs, update README
* remove graphs from README and folder
* update top README in Features and Functionality
---------
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
* Updated Enable Auto-Mixed Precision for Transfer Learning with TensorFlow readme
Updated slightly since I had already restructured a version of this readme prior to original submission. Updated name in readme to match the sample.json. Corrected some branding and formatting.
---------
Co-authored-by: Jimmy Wei <[email protected]>
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
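For context on the headline change: auto-mixed precision with bfloat16 can be enabled in TensorFlow with a one-line Keras policy switch. This is a minimal illustrative sketch of the technique the sample covers, not the sample's exact code:

```python
import tensorflow as tf

# Run compute-heavy ops in bfloat16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

policy = tf.keras.mixed_precision.global_policy()
print(policy.compute_dtype, policy.variable_dtype)  # bfloat16 float32
```

Layers built after the policy is set compute in bfloat16 automatically; existing training scripts typically need no other changes.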
Changed file: AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning/README.md (+17 −12)
````diff
@@ -1,19 +1,21 @@
-# Enabling Auto-Mixed Precision for Transfer Learning with TensorFlow
-This tutorial guides you through the process of enabling auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow* (TF).
+# `Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*` Sample
 
-This sample demonstrates the end-to-end pipeline tasks typically performed in a deep learning use-case: training (and retraining), inference optimization, and serving the model with TensorFlow Serving.
+The `Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*` sample guides you through the process of enabling auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow* (TF).
+
+The sample demonstrates the end-to-end pipeline tasks typically performed in a deep learning use-case: training (and retraining), inference optimization, and serving the model with TensorFlow Serving.
 
 | Area | Description
 |:--- |:---
-| What you will learn | Enable Auto-Mixed Precision for Transfer Learning with TensorFlow
+| What you will learn | Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*
 | Time to complete | 30 minutes
+| Category | Code Optimization
 
 ## Purpose
 
-Through the implementation of end-to-end deep learning example, this sample demonstrates three important concepts:
-1. The benefits of using auto-mixed precision to accelerate tasks like transfer learning, with minimal changes to existing scripts.
-2. The importance of inference optimization on performance.
-3. The ease of using Intel® optimizations in TensorFlow, which are enabled by default in 2.9.0 and newer.
+Through the implementation of end-to-end deep learning example, this sample demonstrates important concepts:
+- The benefits of using auto-mixed precision to accelerate tasks like transfer learning, with minimal changes to existing scripts.
+- The importance of inference optimization on performance.
+- The ease of using Intel® optimizations in TensorFlow, which are enabled by default in 2.9.0 and newer.
 
 ## Prerequisites
 
@@ -64,7 +66,10 @@ The sample tutorial contains one Jupyter Notebook and two Python scripts.
 |`freeze_optimize_v2.py` |The script optimizes a pre-trained TensorFlow model PB file.
 |`tf_benchmark.py` |The script measures inference performance of a model using dummy data.
 
-## Run the Sample on Linux*
+## Run the Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*
+
+### On Linux*
+
 1. Launch Jupyter Notebook.
    ```
    jupyter notebook --ip=0.0.0.0
@@ -78,7 +83,7 @@ The sample tutorial contains one Jupyter Notebook and two Python scripts.
 5. Run every cell in the Notebook in sequence.
 
 
-### Run the Sample on Intel® DevCloud
+### Run the Sample on Intel® DevCloud (Optional)
 
 1. If you do not already have an account, request an Intel® DevCloud account at [*Create an Intel® DevCloud Account*](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
 2. On a Linux* system, open a terminal.
@@ -102,7 +107,8 @@ If you receive an error message, troubleshoot the problem using the **Diagnostic
 
 
 ## Example Output
-You will see diagrams that compare performance and analysis.
+
+You will see diagrams comparing performance and analysis.
 
 The following image illustrates performance comparison for training speedup obtained by enabling auto-mixed precision.
 
@@ -112,7 +118,6 @@ For performance analysis, you will see histograms showing different Tensorflow*
````
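The `tf_benchmark.py` script referenced in the diff measures inference performance on dummy data. The following sketch illustrates that general approach with an assumed toy model and shapes (the real script instead loads an optimized frozen PB file):

```python
import time

import numpy as np
import tensorflow as tf

def benchmark_inference(model, sample, warmup=3, iters=10):
    """Return average per-call inference latency in seconds."""
    for _ in range(warmup):               # warm up tracing and caches
        model(sample, training=False)
    start = time.perf_counter()
    for _ in range(iters):
        model(sample, training=False)
    return (time.perf_counter() - start) / iters

# Toy model for illustration only; not the sample's network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
dummy = np.random.rand(8, 32).astype(np.float32)  # dummy batch of inputs
print(f"average latency: {benchmark_inference(model, dummy) * 1e3:.3f} ms")
```

Timing the same model before and after freezing/optimization (or with auto-mixed precision on and off) gives the comparison the sample's diagrams show.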