
2023.1 ai kit itex get started example fix #1338

Merged 2 commits on Feb 7, 2023
@@ -38,7 +38,7 @@ Please follow bellow steps to setup GPU environment.
 3. Activate the created conda env: ```$source activate user-tensorflow-gpu ```
 4. Install the required packages: ```(user-tensorflow-gpu) $pip install tensorflow_hub ipykernel ```
 5. Deactivate conda env: ```(user-tensorflow-gpu)$conda deactivate ```
-6. Register the kernel to Jupyter NB: ``` $~/.conda/envs/user-tensorflowgpu/bin/python -m ipykernel install --user --name=user-tensorflow-gpu ```
+6. Register the kernel to Jupyter NB: ``` $~/.conda/envs/user-tensorflow-gpu/bin/python -m ipykernel install --user --name=user-tensorflow-gpu ```
 
 Once users finish GPU environment setup, please do the same steps but remove "-gpu" from above steps.
 In the end, you will have two new conda environments which are user-tensorflow-gpu and user-tensorflow
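For reference, the corrected kernel path above now matches the environment name registered with Jupyter. Taken together with the matching CPU environment (the same steps with "-gpu" removed), the README steps could be scripted roughly as below. This is only a sketch: steps 1-2 are outside this hunk, so the env-clone command and the `~/.conda/envs` prefix are assumptions based on the surrounding README, not part of this diff.

```bash
#!/bin/bash
# Rough consolidation of the README steps for both conda environments.
# Assumptions (steps 1-2 are not shown in the hunk above): each user env is
# cloned from the AI Kit's bundled env, and user envs live under ~/.conda/envs.
source /opt/intel/oneapi/setvars.sh --force

for env in user-tensorflow-gpu user-tensorflow; do
  base=${env#user-}                                 # tensorflow-gpu / tensorflow
  conda create -y --name "$env" --clone "$base"     # steps 1-2 (assumed form)
  source activate "$env"                            # step 3
  pip install tensorflow_hub ipykernel              # step 4
  conda deactivate                                  # step 5
  # step 6: register the kernel with Jupyter under the same name
  ~/.conda/envs/$env/bin/python -m ipykernel install --user --name=$env
done
```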
@@ -51,7 +51,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"%env ONEAPI_INSTALL=~/intel/oneapi"
+"%env ONEAPI_INSTALL=/opt/intel/oneapi"
 ]
 },
 {
@@ -124,7 +124,7 @@
 "%%writefile run.sh\n",
 "#!/bin/bash\n",
 "source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1\n",
-"source activate tensorflow-gpu\n",
+"source activate user-tensorflow-gpu\n",
 "echo \"########## Executing the run\"\n",
 "DNNL_VERBOSE=1 python infer_resnet50.py > infer_rn50_gpu.csv\n",
 "echo \"########## Done with the run\""
@@ -156,7 +156,7 @@
 "metadata": {},
 "source": [
 "#### Run on CPU via Intel TensorFlow\n",
-"Users also can run the same infer_resnet50.py on CPU with intel tensorflow or stock tensorflow."
+"Users also can run the same infer_resnet50.py on CPU with intel tensorflow or stock tensorflow. Please switch to the user-tensorflow jupyter kernel and execute again from prerequisites for CPU run"
 ]
 },
 {
@@ -168,7 +168,7 @@
 "%%writefile run.sh\n",
 "#!/bin/bash\n",
 "source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1\n",
-"source activate tensorflow\n",
+"source activate user-tensorflow\n",
 "echo \"########## Executing the run\"\n",
 "DNNL_VERBOSE=1 python infer_resnet50.py > infer_rn50_cpu.csv\n",
 "echo \"########## Done with the run\""
@@ -269,7 +269,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"FdIndex=2"
+"FdIndex=0"
 ]
 },
 {
@@ -325,7 +325,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"onednn.breakdown(data,\"arch\",\"time\")"
+"onednn.breakdown(exec_data,\"arch\",\"time\")"
 ]
 },
 {
@@ -382,7 +382,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"os.chdir(initial_cwd)\n",
 "print('[CODE_SAMPLE_COMPLETED_SUCCESFULLY]')"
 ]
 }