* Intel Python Numpy Numba_dpex kNN sample (#1292)
* *.py and *.ipynb files with implementation
* README.md and sample.json files with documentation
* License and third party programs
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample (#1293)
* add IntelPytorch Quantization code samples (#1301)
* add IntelPytorch Quantization code samples
* fix the spelling error in the README file
* use John's README with grammar fix and title change
* Rename third-party-grograms.txt to third-party-programs.txt
Co-authored-by: Jimmy Wei <[email protected]>
* AMX bfloat16 mixed precision learning TensorFlow Transformer sample (#1317)
* [New Sample] Intel Extension for TensorFlow Getting Started (#1313)
* first draft
* Update README.md
* remove redundant file
* [New Sample] [oneDNN] Benchdnn tutorial (#1315)
* New Sample: benchDNN tutorial
* Update readme: new sample
* Rename sample to benchdnn_tutorial
* Name fix
* Add files via upload (#1320)
* [New Sample] oneCCL Bindings for PyTorch Getting Started (#1316)
* Update README.md
* [New Sample] oneCCL Bindings for PyTorch Getting Started
* Update README.md
* add torch-ccl version check
* [New Sample] Intel Extension for PyTorch Getting Started (#1314)
* add new ipex GSG notebook for dGPU
* Update sample.json
for expertise field
* Update requirements.txt
Update package versions to comply with Snyk tool
* Updated title field in sample.json in TF Transformer AMX bfloat16 Mixed Precision sample to fit within character length range (#1327)
* add arch checker class (#1332)
* change gpu.patch to convert the code samples from cpu to gpu correctly (#1334)
* Fixes for spelling in AMX bfloat16 transformer sample and printing error in python code in numpy vs numba sample (#1335)
* 2023.1 ai kit itex get started example fix (#1338)
* Fix the typo
* Update ResNet50_Inference.ipynb
* fix resnet inference demo link (#1339)
* Fix printing issue in numpy vs numba AI sample (#1356)
* Fix Invalid Kmeans parameters on oneAPI 2023 (#1345)
* Update README to add new samples into the list (#1366)
---------
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
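Several of the merged samples above (#1293, #1317) center on bfloat16 training through Intel® Extension for PyTorch. As a rough sketch of that pattern, with a toy model and random data standing in for the samples' actual code (assumes the intel_extension_for_pytorch package is installed):

```python
import torch
import intel_extension_for_pytorch as ipex

# Toy stand-ins for the sample's real model, optimizer, and data.
model = torch.nn.Linear(64, 4).train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

# ipex.optimize applies weight-layout and dtype optimizations for training.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

data = torch.randn(8, 64)
target = torch.randint(0, 4, (8,))

optimizer.zero_grad()
# bfloat16 autocast on CPU; AMX kernels are picked up on supporting Xeon CPUs.
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    loss = criterion(model(data), target)
loss.backward()
optimizer.step()
```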
Removed rows:

- | Scikit-learn | [IntelScikitLearn_Extensions_SVC_Adult](IntelScikitLearn_Extensions_SVC_Adult) | Use Intel® Extension for Scikit-learn to accelerate training and prediction with the SVC algorithm on the Adult dataset. Compare the performance of the SVC algorithm optimized through Intel® Extension for Scikit-learn against original Scikit-learn.
- | daal4py | [IntelPython_daal4py_DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed Linear Regression model with oneAPI Data Analytics Library (oneDAL) daal4py library memory objects.
- | PyTorch | [IntelPyTorch_Extensions_AutoMixedPrecision](IntelPyTorch_Extensions_AutoMixedPrecision) | Download, compile, and get started with Intel® Extension for PyTorch*.
- | PyTorch | [IntelPyTorch_TrainingOptimizations_AMX_BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch with Advanced Matrix Extensions Bfloat16.
- | PyTorch | [IntelPyTorch_TorchCCL_Multinode_Training](IntelPyTorch_TorchCCL_Multinode_Training) | Perform distributed training with oneAPI Collective Communications Library (oneCCL) in PyTorch.
- | TensorFlow & Model Zoo | [IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8](IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8) | Run ResNet50 inference on Intel's pretrained FP32 and Int8 models.
- | TensorFlow & Model Zoo | [IntelTensorFlow_PerformanceAnalysis](IntelTensorFlow_PerformanceAnalysis) | Analyze the performance difference between stock TensorFlow and Intel TensorFlow.
- | TensorFlow | [IntelTensorFlow_InferenceOptimization](IntelTensorFlow_InferenceOptimization) | Optimize a pre-trained model for better inference performance.
- | XGBoost | [IntelPython_XGBoost_Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit of using Intel-optimized XGBoost compared to un-optimized XGBoost 0.81.
- | XGBoost | [IntelPython_XGBoost_daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of minimal code changes to port a pre-trained XGBoost model to daal4py prediction for much faster prediction than XGBoost prediction.

Added rows:

+ | PyTorch | [IntelPyTorch Extensions Inference Optimization](IntelPyTorch_Extensions_Inference_Optimization) | Apply Intel® Extension for PyTorch (IPEX) optimizations to a PyTorch workload to gain a performance boost.
+ | PyTorch | [IntelPyTorch TrainingOptimizations AMX BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch with Advanced Matrix Extensions Bfloat16.
+ | Numpy, Numba | [IntelPython Numpy Numba dpex kNN](IntelPython_Numpy_Numba_dpex_kNN) | Optimize a k-NN model with numba_dpex operations without sacrificing accuracy.
+ | XGBoost | [IntelPython XGBoost Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit of using Intel-optimized XGBoost compared to un-optimized XGBoost 0.81.
+ | XGBoost | [IntelPython XGBoost daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of minimal code changes to port a pre-trained XGBoost model to daal4py prediction for much faster prediction than XGBoost prediction.
+ | daal4py | [IntelPython daal4py DistributedKMeans](IntelPython_daal4py_DistributedKMeans) | Train and predict with a distributed k-means model using the Python API package daal4py powered by the oneAPI Data Analytics Library.
+ | daal4py | [IntelPython daal4py DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed Linear Regression model with oneAPI Data Analytics Library (oneDAL) daal4py library memory objects.
+ | PyTorch | [IntelPytorch Quantization](IntelPytorch_Quantization) | Inference performance improvements using Intel® Extension for PyTorch* (IPEX) with feature quantization.
+ | TensorFlow | [IntelTensorFlow AMX BF16 Training](IntelTensorFlow_AMX_BF16_Training) | Training performance improvements with Intel® AMX BF16.
+ | TensorFlow | [IntelTensorFlow Enabling Auto Mixed Precision for TransferLearning](IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning) | Enable auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow*.
+ | TensorFlow | [IntelTensorFlow InferenceOptimization](IntelTensorFlow_InferenceOptimization) | Optimize a pre-trained model for better inference performance.
+ | TensorFlow & Model Zoo | [IntelTensorFlow ModelZoo Inference with FP32 Int8](IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8) | Run ResNet50 inference on Intel's pretrained FP32 and Int8 models.
+ | TensorFlow & Model Zoo | [IntelTensorFlow PerformanceAnalysis](IntelTensorFlow_PerformanceAnalysis) | Analyze the performance difference between stock TensorFlow and Intel TensorFlow.
+ | TensorFlow | [IntelTensorFlow Transformer AMX bfloat16 MixedPrecision](IntelTensorFlow_Transformer_AMX_bfloat16_MixedPrecision) | Run a transformer classification model with bfloat16 mixed precision.
+ | Scikit-learn | [IntelScikitLearn Extensions SVC Adult](IntelScikitLearn_Extensions_SVC_Adult) | Use Intel® Extension for Scikit-learn to accelerate training and prediction with the SVC algorithm on the Adult dataset. Compare the performance of the SVC algorithm optimized through Intel® Extension for Scikit-learn against original Scikit-learn.
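The Scikit-learn row above relies on sklearnex's patching mechanism, which swaps Intel-optimized implementations in under the stock scikit-learn API. A minimal sketch, with synthetic data standing in for the Adult dataset the real sample downloads:

```python
from sklearnex import patch_sklearn
patch_sklearn()  # must run before the sklearn imports it is meant to accelerate

from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Adult dataset used by the actual sample.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```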
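The XGBoost daal4pyPrediction rows describe converting a trained booster to a daal4py model so prediction runs on oneDAL. A minimal sketch, assuming daal4py and xgboost are installed; the dataset is a synthetic placeholder and nClasses matches the binary objective:

```python
import daal4py as d4p
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic binary-classification data as a placeholder for the sample's dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
booster = xgb.train({"objective": "binary:logistic"},
                    xgb.DMatrix(X, label=y), num_boost_round=50)

# Convert the trained booster into a daal4py GBT model, then predict with oneDAL.
d4p_model = d4p.get_gbt_model_from_xgboost(booster)
result = d4p.gbt_classification_prediction(nClasses=2).compute(X, d4p_model)
print(result.prediction[:5].ravel())
```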
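The oneCCL multinode-training row builds on the oneCCL bindings for PyTorch, which register a "ccl" backend with torch.distributed. A rough sketch of the initialization pattern; the PMI_RANK/PMI_SIZE environment variables and localhost rendezvous are assumptions for an mpirun-style launch, not the sample's exact setup:

```python
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)

# Rank and world size typically come from the MPI launcher (e.g., mpirun).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
rank = int(os.environ.get("PMI_RANK", "0"))
world_size = int(os.environ.get("PMI_SIZE", "1"))

dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)

# A toy all-reduce to confirm the collective path works.
t = torch.ones(4) * rank
dist.all_reduce(t)
print(f"rank {rank}: {t}")
dist.destroy_process_group()
```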
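The IntelPytorch Quantization row refers to IPEX's static quantization flow. A rough sketch, assuming the prepare/convert API and default static qconfig of 2023-era IPEX releases; the toy model and random calibration data are placeholders:

```python
import torch
import intel_extension_for_pytorch as ipex
from intel_extension_for_pytorch.quantization import prepare, convert

model = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4)
).eval()
example_input = torch.randn(8, 64)

# Insert observers using a default static quantization configuration.
qconfig = ipex.quantization.default_static_qconfig
prepared = prepare(model, qconfig, example_inputs=example_input, inplace=False)

# Calibrate on representative data (random here as a placeholder).
with torch.no_grad():
    for _ in range(10):
        prepared(torch.randn(8, 64))

# Convert to the quantized model and JIT-trace/freeze for deployment.
quantized = convert(prepared)
with torch.no_grad():
    traced = torch.jit.freeze(torch.jit.trace(quantized, example_input))
print(traced(example_input).shape)
```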
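The auto-mixed-precision TensorFlow rows hinge on Keras's global dtype policy, which computes in bfloat16 while keeping variables in float32. A minimal sketch of enabling it for transfer learning; the ResNet50 backbone and 10-class head are illustrative choices, not necessarily what the sample uses:

```python
import tensorflow as tf

# Compute in bfloat16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

base = tf.keras.applications.ResNet50(include_top=False, weights=None, pooling="avg")
base.trainable = False  # transfer learning: freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, dtype="float32"),  # keep logits in float32 for stability
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```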
# Using Samples in Intel® DevCloud for oneAPI
To get started using samples in the DevCloud, refer to [Using AI samples in Intel® DevCloud for oneAPI](https://github.com/intel-ai-tce/oneAPI-samples/tree/devcloud/AI-and-Analytics#using-samples-in-intel-oneapi-devcloud).