Install the Intel GPU Driver on the build server; it is needed to build with GPU support and AOT ([Ahead-of-time compilation](https://www.intel.com/content/www/us/en/docs/dpcpp-cpp-compiler/developer-guide-reference/2025-0/ahead-of-time-compilation.html)).
Refer to [Install Intel GPU driver](install_for_xpu.md/#install-gpu-drivers) for details.
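A minimal sketch of installing the required packages on Ubuntu 22.04, assuming the Intel GPU package repository from the linked guide is already configured; exact package names vary between driver releases, so treat this as illustrative only:

```bash
# Runtime packages plus the optional developer packages needed for AOT builds.
# Package names depend on the driver release described in the linked guide.
sudo apt-get update
sudo apt-get install -y \
    intel-opencl-icd intel-level-zero-gpu level-zero \
    libigc-dev intel-igc-cm libigdfcl-dev libigfxcmrt-dev level-zero-dev
```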
Note:
1. Make sure to [install developer runtime packages](https://dgpu-docs.intel.com/installation-guides/ubuntu/ubuntu-jammy-dc.html#optional-install-developer-packages) before building Intel® Extension for TensorFlow*.
AOT is a compile-time option that reduces GPU kernel initialization time at startup by generating binary code for a specified hardware platform during the build. AOT makes the installation package larger but improves startup performance.
Without AOT, Intel® Extension for TensorFlow* is translated to binary code for the local hardware platform at startup, which can prolong GPU startup time to several minutes or more.
For more information, refer to [Use AOT for Integrated Graphics (Intel GPU)](https://www.intel.com/content/www/us/en/docs/dpcpp-cpp-compiler/developer-guide-reference/2025-0/ahead-of-time-compilation.html).
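To make the trade-off concrete, here is a minimal sketch of what AOT amounts to with the OpenCL™ Offline Compiler (OCLOC) shipped with the driver; `my_kernel.cl` and the chosen device name are hypothetical and only illustrate pre-building a device binary at compile time rather than at startup:

```bash
# Ahead-of-time compile an OpenCL kernel into a binary for one specific platform,
# so no just-in-time translation is needed when the application starts.
ocloc compile -file my_kernel.cl -device ats-m150
```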
#### Install oneAPI Base Toolkit
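A minimal sketch, assuming the toolkit is installed under the default `/opt/intel/oneapi` prefix, of activating the oneAPI environment before configuring the build:

```bash
# Load the oneAPI compiler, oneMKL, and related environment variables
# into the current shell (adjust the path for a non-default install location).
source /opt/intel/oneapi/setvars.sh
```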
When running `./configure`, you are prompted for the AOT compilation targets. The default is '', which means no AOT.
Fill in one or more compilation device name strings for the target hardware platforms, such as `ats-m150` or `acm-g11`.
Here is the list of GPUs we've verified:
|GPU|AOT Compilation Device Name|
|-|-|
|Intel® Data Center GPU Flex Series 170|ats-m150|
|Intel® Data Center GPU Flex Series 140|ats-m75|
|Intel® Data Center GPU Max Series|pvc|
|Intel® Arc™ A730M|acm-g10|
|Intel® Arc™ A380|acm-g11|
Please refer to the `Available GPU Platforms` section at the end of the [Ahead of Time Compilation](https://www.intel.com/content/www/us/en/docs/dpcpp-cpp-compiler/developer-guide-reference/2025-0/ahead-of-time-compilation.html) document for more AOT compilation device names, or create an [issue](https://github.com/intel/intel-extension-for-tensorflow/issues) to ask for support.
To get the full list of supported device names, use the OpenCL™ Offline Compiler (OCLOC) tool, which is installed as part of the GPU driver. Run OCLOC and look for the `-device <device_type>` field in the output, as sketched below.
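A minimal sketch, assuming the `ocloc` binary installed by the GPU driver packages is on your `PATH`:

```bash
# Print OCLOC's compile options; the help text describes the acceptable
# values for -device (for example ats-m150, acm-g11, pvc).
ocloc compile --help
```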
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command.
NOTE: XPU mode, which supports both CPU and GPU, is disabled. "--config=xpu" only supports GPU, which is the same as "--config=gpu".
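A minimal sketch of using one of these configs; the Bazel target path below is hypothetical and should be replaced with the package target defined in this project:

```bash
# Build the pip package with the GPU config (equivalent to --config=xpu here).
bazel build -c opt --config=gpu //itex/tools/pip_package:build_pip_package
```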