
Commit 37c1ef4

jkinsky, jimmytwei, krzeszew, alexsin368, and ZhaoqiongZ authored
Ai and analytics features and functionality intel tensor flow enabling auto mixed precision for transfer learning (#1461)
* Fixes for 2023.1 AI Kit (#1409)
* Intel Python Numpy Numba_dpes kNN sample (#1292)
* *.py and *.ipynb files with implementation
* README.md and sample.json files with documentation
* License and third party programs
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample (#1293)
* add IntelPytorch Quantization code samples (#1301)
* add IntelPytorch Quantization code samples
* fix the spelling error in the README file
* use john's README with grammar fix and title change
* Rename third-party-grograms.txt to third-party-programs.txt

Co-authored-by: Jimmy Wei <[email protected]>

* AMX bfloat16 mixed precision learning TensorFlow Transformer sample (#1317)
* [New Sample] Intel Extension for TensorFlow Getting Started (#1313)
* first draft
* Update README.md
* remove redundant file
* [New Sample] [oneDNN] Benchdnn tutorial (#1315)
* New Sample: benchDNN tutorial
* Update readme: new sample
* Rename sample to benchdnn_tutorial
* Name fix
* Add files via upload (#1320)
* [New Sample] oneCCL Bindings for PyTorch Getting Started (#1316)
* Update README.md
* [New Sample] oneCCL Bindings for PyTorch Getting Started
* Update README.md
* add torch-ccl version check
* [New Sample] Intel Extension for PyTorch Getting Started (#1314)
* add new ipex GSG notebook for dGPU
* Update sample.json for expertise field
* Update requirements.txt
  Update package versions to comply with Snyk tool
* Updated title field in sample.json in TF Transformer AMX bfloat16 Mixed Precision sample to fit within character length range (#1327)
* add arch checker class (#1332)
* change gpu.patch to convert the code samples from cpu to gpu correctly (#1334)
* Fixes for spelling in AMX bfloat16 transformer sample and printing error in python code in numpy vs numba sample (#1335)
* 2023.1 ai kit itex get started example fix (#1338)
* Fix the typo
* Update ResNet50_Inference.ipynb
* fix resnet inference demo link (#1339)
* Fix printing issue in numpy vs numba AI sample (#1356)
* Fix Invalid Kmeans parameters on oneAPI 2023 (#1345)
* Update README to add new samples into the list (#1366)
* PyTorch AMX BF16 Training sample: remove graphs and performance numbers (#1408)
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample
* remove performance graphs, update README
* remove graphs from README and folder
* update top README in Features and Functionality

---------

Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>

* Updated Enable Auto-Mixed Precision for Transfer Learning with TensorFlow readme
  Updated slightly since I had already restructured a version of this readme prior to original submission. Updated name in readme to match the sample.json. Corrected some branding and formatting.

---------

Co-authored-by: Jimmy Wei <[email protected]>
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
1 parent 4191749 commit 37c1ef4

File tree

1 file changed (+17, -12 lines)
  • AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning


AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning/README.md

+17 -12
@@ -1,19 +1,21 @@
-# Enabling Auto-Mixed Precision for Transfer Learning with TensorFlow
-This tutorial guides you through the process of enabling auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow* (TF).
+# `Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*` Sample
 
-This sample demonstrates the end-to-end pipeline tasks typically performed in a deep learning use-case: training (and retraining), inference optimization, and serving the model with TensorFlow Serving.
+The `Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*` sample guides you through the process of enabling auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow* (TF).
+
+The sample demonstrates the end-to-end pipeline tasks typically performed in a deep learning use-case: training (and retraining), inference optimization, and serving the model with TensorFlow Serving.
 
 | Area | Description
 |:--- |:---
-| What you will learn | Enable Auto-Mixed Precision for Transfer Learning with TensorFlow
+| What you will learn | Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*
 | Time to complete | 30 minutes
+| Category | Code Optimization
 
 ## Purpose
 
-Through the implementation of end-to-end deep learning example, this sample demonstrates three important concepts:
-1. The benefits of using auto-mixed precision to accelerate tasks like transfer learning, with minimal changes to existing scripts.
-2. The importance of inference optimization on performance.
-3. The ease of using Intel® optimizations in TensorFlow, which are enabled by default in 2.9.0 and newer.
+Through the implementation of end-to-end deep learning example, this sample demonstrates important concepts:
+- The benefits of using auto-mixed precision to accelerate tasks like transfer learning, with minimal changes to existing scripts.
+- The importance of inference optimization on performance.
+- The ease of using Intel® optimizations in TensorFlow, which are enabled by default in 2.9.0 and newer.
 
 ## Prerequisites
 
@@ -64,7 +66,10 @@ The sample tutorial contains one Jupyter Notebook and two Python scripts.
 |`freeze_optimize_v2.py` |The script optimizes a pre-trained TensorFlow model PB file.
 |`tf_benchmark.py` |The script measures inference performance of a model using dummy data.
 
-## Run the Sample on Linux*
+## Run the Enable Auto-Mixed Precision for Transfer Learning with TensorFlow*
+
+### On Linux*
+
 1. Launch Jupyter Notebook.
    ```
    jupyter notebook --ip=0.0.0.0
@@ -78,7 +83,7 @@ The sample tutorial contains one Jupyter Notebook and two Python scripts.
 5. Run every cell in the Notebook in sequence.
 
 
-### Run the Sample on Intel® DevCloud
+### Run the Sample on Intel® DevCloud (Optional)
 
 1. If you do not already have an account, request an Intel® DevCloud account at [*Create an Intel® DevCloud Account*](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
 2. On a Linux* system, open a terminal.
@@ -102,7 +107,8 @@ If you receive an error message, troubleshoot the problem using the **Diagnostic
 
 
 ## Example Output
-You will see diagrams that compare performance and analysis.
+
+You will see diagrams comparing performance and analysis.
 
 The following image illustrates performance comparison for training speedup obtained by enabling auto-mixed precision.
 
@@ -112,7 +118,6 @@ For performance analysis, you will see histograms showing different Tensorflow*
 
 ![Inference Speedup](images/inference-perf-comp.png)
 
-
 ## License
 
 Code samples are licensed under the MIT license. See
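
For readers landing on this commit, a minimal sketch of what bfloat16 auto-mixed precision looks like in TensorFlow code may be helpful. The snippet below is illustrative and not part of the sample or this commit: it uses the standard Keras mixed-precision API available in TensorFlow 2.9 and newer, and the ResNet50 base model, layer sizes, and dummy data are assumptions, not the sample's notebook.

```python
# Sketch: bfloat16 auto-mixed precision for transfer learning (illustrative only).
# Assumes TensorFlow 2.9+; downloading ImageNet weights requires network access.
import tensorflow as tf

# Run compute in bfloat16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

# Transfer learning: reuse a pretrained feature extractor, train a new head.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # retrain only the classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    # Keep the final softmax in float32 for numerical stability.
    tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data stands in for a real dataset.
x = tf.random.uniform((32, 224, 224, 3))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, batch_size=8)
```

With this policy set, matrix-multiply and convolution compute can run in bfloat16 on hardware that supports it (for example, Intel CPUs with AMX) while variables stay in float32, which is the same trade-off the auto-mixed precision approach described in the README makes; the exact mechanism used in the sample's notebook may differ.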
