Commit b4c3231

[New Sample] Intel Extension for TensorFlow Getting Started (#1313)

* first draft
* Update README.md
* remove redundant file

1 parent 48264e5 commit b4c3231

File tree

6 files changed: +526 −0 lines changed

AI-and-Analytics/Getting-Started-Samples/Intel_Extension_For_TensorFlow_GettingStarted/.gitkeep

Whitespace-only changes.
Copyright Intel Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# Intel Extension for TensorFlow Getting Started Sample

This code sample shows how to run a TensorFlow inference workload on both GPU and CPU using the Intel® AI Analytics Toolkit, and how to verify GPU and CPU usage by analyzing oneDNN verbose logs.
## Purpose

- Show how to use the different conda environments in the Intel® AI Analytics Toolkit to run TensorFlow workloads on both CPU and GPU
- Show how to validate GPU or CPU usage for TensorFlow workloads on Intel CPUs and GPUs
## Key implementation details

1. Leverage the [resnet50 inference sample](https://github.com/intel/intel-extension-for-tensorflow/tree/main/examples/infer_resnet50) from intel-extension-for-tensorflow.
2. Use the resnet50v1.5 pretrained model from TensorFlow Hub.
3. Run inference with images from the Intel Caffe GitHub repository.
4. Show how to use different conda environments to run on Intel CPUs and GPUs.
5. Analyze oneDNN verbose logs to validate GPU or CPU usage.
## License

Code samples are licensed under the MIT license. See [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third party program licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)
## Running Samples on the Intel® DevCloud

If you are running this sample on the DevCloud, skip the Pre-requirements and go to the [Activate Conda Environment](#activate-conda) section.
## Pre-requirements (Local or Remote Host Installation)
26+
27+
TensorFlow* is ready for use once you finish the Intel® AI Analytics Toolkit (AI Kit) installation and have run the post installation script.
28+
29+
You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation and the Toolkit [Intel® AI Analytics Toolkit Get Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.
30+
31+
## Environment Setup

This sample requires two additional pip packages: tensorflow_hub and ipykernel. Clone the TensorFlow conda environment into your home folder and install the additional packages there.

Follow the steps below to set up the GPU environment:

1. Source the oneAPI environment variables: `$ source /opt/intel/oneapi/setvars.sh`
2. Create the conda environment: `$ conda create --name user-tensorflow-gpu --clone tensorflow-gpu`
3. Activate the created conda environment: `$ source activate user-tensorflow-gpu`
4. Install the required packages: `(user-tensorflow-gpu) $ pip install tensorflow_hub ipykernel`
5. Deactivate the conda environment: `(user-tensorflow-gpu) $ conda deactivate`
6. Register the kernel with Jupyter: `$ ~/.conda/envs/user-tensorflow-gpu/bin/python -m ipykernel install --user --name=user-tensorflow-gpu`

Once you finish the GPU environment setup, repeat the same steps without the "-gpu" suffix. In the end, you will have two new conda environments: user-tensorflow-gpu and user-tensorflow.
## How to Build and Run

You can run the Jupyter notebook with the sample code on your local server, or download the sample code from the notebook as a Python file and run it locally or on the Intel DevCloud.
### Run the Sample in Jupyter Notebook<a name="run-as-jupyter-notebook"></a>

To open the Jupyter notebook on your local server:

1. Start the Jupyter notebook server: `jupyter notebook --ip=0.0.0.0`
2. Open the `ResNet50_Inference.ipynb` file in the Notebook Dashboard.
3. Select the related Jupyter kernel. In this example, select 'Kernel' -> 'Change kernel' -> user-tensorflow-gpu for the GPU run first.
4. Run the cells in the Jupyter notebook sequentially by clicking the **Run** button.
5. Select the user-tensorflow Jupyter kernel and run again from the beginning for the CPU run.
---
**NOTE**

In the Jupyter page, be sure to select the correct kernel. In this example, select 'Kernel' -> 'Change kernel' -> user-tensorflow-gpu or user-tensorflow.

---
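To make the GPU/CPU validation described above possible, oneDNN verbose logging has to be enabled before TensorFlow is imported, because oneDNN reads its environment variables when the library loads. A minimal sketch of the first notebook cell (the TensorFlow and TensorFlow Hub imports are left commented out here because they require the AI Kit conda environment):

```python
import os

# oneDNN reads these variables at library load time, so they must be set
# before `import tensorflow`.
os.environ["ONEDNN_VERBOSE"] = "1"  # emit one log line per primitive execution
os.environ["DNNL_VERBOSE"] = "1"    # older spelling, kept for compatibility

# The notebook then imports TensorFlow and runs the ResNet50 inference;
# commented out in this sketch:
# import tensorflow as tf
# import tensorflow_hub as hub

print(os.environ["ONEDNN_VERBOSE"])  # → 1
```

Capture the notebook's stdout (or redirect it to a file) and inspect the `onednn_verbose,exec,...` lines to confirm whether the primitives ran on the `cpu` or `gpu` engine.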
### Troubleshooting

If an error occurs, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits. [Learn more](https://software.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)

After learning how to use the extensions for Intel oneAPI Toolkits, return to this readme for instructions on how to build and run a sample.
