
Fixes for AMX bfloat16 transformer and numba vs numpy samples #1335


Merged
```diff
@@ -217,7 +217,7 @@ def knn_numba(X_train, y_train, X_test, k):
 predictions = knn_numba(X_train.values, y_train.values, X_test.values, 3)
 true_values = y_test.to_numpy()
 accuracy = np.mean(predictions == true_values)
-print('Numba accuracy:' + accuracy)
+print('Numba accuracy:', accuracy)


 # ## Numba_dpex k-NN
@@ -335,7 +335,7 @@ def knn_numba_dpex(train, train_labels, test, k, predictions, votes_to_classes_l

 true_values = y_test.to_numpy()
 accuracy = np.mean(predictions_numba == true_values)
-print('Numba_dpex accuracy:' + accuracy)
+print('Numba_dpex accuracy:', accuracy)


 # In[ ]:
```
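The two hunks above replace string concatenation with a comma-separated `print` argument. The original code fails at runtime because `np.mean` returns a NumPy floating-point scalar, and concatenating a `str` with that scalar raises a `TypeError`. A minimal reproduction of the bug and the fix (values are illustrative, not from the sample's dataset):

```python
import numpy as np

# np.mean over a boolean comparison yields a NumPy float scalar, not a str.
predictions = np.array([1, 0, 1])
true_values = np.array([1, 1, 1])
accuracy = np.mean(predictions == true_values)

# Broken form from the sample: str + numpy float raises TypeError.
try:
    print('Numba accuracy:' + accuracy)
except TypeError as exc:
    print('concatenation failed:', exc)

# Fixed form: print joins its arguments, converting each to str itself.
print('Numba accuracy:', accuracy)
```

Passing the value as a separate argument (or using an f-string) avoids the implicit `str + float` addition entirely.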
```diff
@@ -20,7 +20,7 @@
 "id": "1bdd8c5b-209e-4a7d-9d98-2b552c987039",
 "metadata": {},
 "source": [
-"# Tensorflow Transformer with AMX bfoat16 Mixed Precision Learning\n",
+"# Tensorflow Transformer with AMX bfloat16 Mixed Precision Learning\n",
 "\n",
 "In this example we will be learning Transformer block for text classification using **IMBD dataset**. And then we will modify the code to use mixed precision learning with **bfloat16**. The example based on the [Text classification with Transformer Keras code example](https://keras.io/examples/nlp/text_classification_with_transformer/).\n",
 "\n",
```
```diff
@@ -1,4 +1,4 @@
-# `TensorFlow (TF) Transformer with Intel® Advanced Matrix Extensions (Intel® AMX) bfoat16 Mixed Precision Learning`
+# `TensorFlow (TF) Transformer with Intel® Advanced Matrix Extensions (Intel® AMX) bfloat16 Mixed Precision Learning`

 This sample code demonstrates optimizing a TensorFlow model with Intel® Advanced Matrix Extensions (Intel® AMX) using bfloat16 (Brain Floating Point) on 4th Gen Intel® Xeon® Scalable Processors (Sapphire Rapids).
```
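For context on what the corrected title describes: bfloat16 mixed precision in TensorFlow/Keras is typically enabled through the `tf.keras.mixed_precision` API, which makes layers compute in bfloat16 while keeping trainable variables in float32. A minimal sketch (not taken from the sample itself; the layer sizes here are illustrative stand-ins for the sample's Transformer block):

```python
import tensorflow as tf

# Compute in bfloat16, keep variables in float32 for stable weight updates.
tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')

# Illustrative model; the actual sample trains a Transformer on IMDB text.
inputs = tf.keras.Input(shape=(16,))
x = tf.keras.layers.Dense(64, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(2, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

print(model.layers[1].compute_dtype)   # 'bfloat16'
print(model.layers[1].variable_dtype)  # 'float32'
```

On hardware with Intel AMX (4th Gen Xeon, Sapphire Rapids), the bfloat16 matrix multiplications in such a model can be dispatched to AMX tile instructions by the oneDNN-backed TensorFlow build.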
```diff
@@ -1,6 +1,6 @@
 {
 "guid": "60A68888-6099-414E-999B-EDC7310A01EA",
-"name": "TensorFlow Transformer with Advanced Matrix Extensions bfoat16 Mixed Precision Learning",
+"name": "TensorFlow Transformer with Advanced Matrix Extensions bfloat16 Mixed Precision Learning",
 "categories": ["Toolkit/oneAPI AI And Analytics/AI Getting Started Samples"],
 "description": "This sample code demonstrates optimizing a TensorFlow model with Intel® Advanced Matrix Extensions (Intel® AMX) using bfloat16 (Brain Floating Point) on Sapphire Rapids",
 "builder": ["cli"],
```