- | Scikit-learn | [IntelScikitLearn_Extensions_SVC_Adult](IntelScikitLearn_Extensions_SVC_Adult) | Use Intel® Extension for Scikit-learn to accelerate training and prediction with the SVC algorithm on the Adult dataset. Compare the performance of the SVC algorithm optimized through Intel® Extension for Scikit-learn against stock Scikit-learn.
- | daal4py | [IntelPython_daal4py_DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed Linear Regression model with oneAPI Data Analytics Library (oneDAL) daal4py library memory objects.
- | PyTorch | [IntelPyTorch_Extensions_AutoMixedPrecision](IntelPyTorch_Extensions_AutoMixedPrecision) | Download, compile, and get started with Intel® Extension for PyTorch*.
- | PyTorch | [IntelPyTorch_TrainingOptimizations_AMX_BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch* with Advanced Matrix Extensions (AMX) Bfloat16.
- | PyTorch | [IntelPyTorch_TorchCCL_Multinode_Training](IntelPyTorch_TorchCCL_Multinode_Training) | Perform distributed training with oneAPI Collective Communications Library (oneCCL) in PyTorch.
- | TensorFlow & Model Zoo | [IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8](IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8) | Run ResNet50 inference on Intel's pretrained FP32 and Int8 models.
- | TensorFlow & Model Zoo | [IntelTensorFlow_PerformanceAnalysis](IntelTensorFlow_PerformanceAnalysis) | Analyze the performance difference between stock TensorFlow and Intel TensorFlow.
- | TensorFlow | [IntelTensorFlow_InferenceOptimization](IntelTensorFlow_InferenceOptimization) | Optimize a pre-trained model for better inference performance.
- | XGBoost | [IntelPython_XGBoost_Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit of using Intel-optimized XGBoost compared to un-optimized XGBoost 0.81.
- | XGBoost | [IntelPython_XGBoost_daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of minimal code changes to port a pre-trained XGBoost model to daal4py prediction, which is much faster than XGBoost prediction.
+ | PyTorch | [IntelPyTorch Extensions Inference Optimization](IntelPyTorch_Extensions_Inference_Optimization) | Apply Intel® Extension for PyTorch* (IPEX) optimizations to a PyTorch workload for a performance boost.
+ | PyTorch | [IntelPyTorch TrainingOptimizations AMX BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch* with Advanced Matrix Extensions (AMX) Bfloat16.
+ | Numpy, Numba | [IntelPython Numpy Numba dpex kNN](IntelPython_Numpy_Numba_dpex_kNN) | Optimize a k-NN model with numba_dpex operations without sacrificing accuracy.
+ | XGBoost | [IntelPython XGBoost Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit of using Intel-optimized XGBoost compared to un-optimized XGBoost 0.81.
+ | XGBoost | [IntelPython XGBoost daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of minimal code changes to port a pre-trained XGBoost model to daal4py prediction, which is much faster than XGBoost prediction.
+ | daal4py | [IntelPython daal4py DistributedKMeans](IntelPython_daal4py_DistributedKMeans) | Train and predict with a distributed k-means model using the daal4py Python API powered by the oneAPI Data Analytics Library (oneDAL).
+ | daal4py | [IntelPython daal4py DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed Linear Regression model with oneAPI Data Analytics Library (oneDAL) daal4py library memory objects.
+ | PyTorch | [IntelPytorch Quantization](IntelPytorch_Quantization) | Analyze inference performance improvements using Intel® Extension for PyTorch* (IPEX) with quantization.
+ | TensorFlow | [IntelTensorFlow AMX BF16 Training](IntelTensorFlow_AMX_BF16_Training) | Analyze training performance improvements with Intel® AMX BF16.
+ | TensorFlow | [IntelTensorFlow Enabling Auto Mixed Precision for TransferLearning](IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning) | Enable auto-mixed precision to use low-precision datatypes, such as bfloat16, for transfer learning with TensorFlow*.
+ | TensorFlow | [IntelTensorFlow InferenceOptimization](IntelTensorFlow_InferenceOptimization) | Optimize a pre-trained model for better inference performance.
+ | TensorFlow & Model Zoo | [IntelTensorFlow ModelZoo Inference with FP32 Int8](IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8) | Run ResNet50 inference on Intel's pretrained FP32 and Int8 models.
+ | TensorFlow & Model Zoo | [IntelTensorFlow PerformanceAnalysis](IntelTensorFlow_PerformanceAnalysis) | Analyze the performance difference between stock TensorFlow and Intel TensorFlow.
+ | TensorFlow | [IntelTensorFlow Transformer AMX bfloat16 MixedPrecision](IntelTensorFlow_Transformer_AMX_bfloat16_MixedPrecision) | Run a transformer classification model with bfloat16 mixed precision.
+ | Scikit-learn | [IntelScikitLearn Extensions SVC Adult](IntelScikitLearn_Extensions_SVC_Adult) | Use Intel® Extension for Scikit-learn to accelerate training and prediction with the SVC algorithm on the Adult dataset. Compare the performance of the SVC algorithm optimized through Intel® Extension for Scikit-learn against stock Scikit-learn.
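The Intel® Extension for Scikit-learn samples above center on a single patching call that re-routes supported estimators (such as SVC) to oneDAL. A minimal sketch of that opt-in, assuming the `scikit-learn-intelex` package and falling back to stock scikit-learn when it is absent:

```python
# Sketch: opt in to Intel-accelerated scikit-learn estimators.
# Assumes the scikit-learn-intelex package; falls back gracefully otherwise.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()          # re-routes supported estimators (e.g. SVC) to oneDAL
    ACCELERATED = True
except ImportError:
    ACCELERATED = False      # stock scikit-learn is used unchanged

print("Intel acceleration active:", ACCELERATED)
```

Because patching happens before the `sklearn` imports, the rest of a training script stays unchanged; that is the "minimal code changes" idea the sample benchmarks.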
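The XGBoost-to-daal4py port mentioned in the table can likewise be sketched in a few lines. This is a hedged illustration, not the sample itself: the tiny random dataset is made up for the demo, it assumes the `daal4py` and `xgboost` packages, and it skips gracefully when either is missing.

```python
# Sketch: reuse a trained XGBoost classifier through daal4py's faster predictor.
# The toy dataset is hypothetical; assumes daal4py and xgboost are installed.
try:
    import numpy as np
    import xgboost as xgb
    import daal4py as d4p

    X = np.random.rand(200, 4)
    y = (X[:, 0] > 0.5).astype(int)
    clf = xgb.XGBClassifier(n_estimators=10).fit(X, y)

    # Convert the trained booster once, then predict with oneDAL's GBT engine.
    daal_model = d4p.get_gbt_model_from_xgboost(clf.get_booster())
    result = d4p.gbt_classification_prediction(nClasses=2).compute(X, daal_model)
    print("daal4py predictions shape:", result.prediction.shape)
    PORTED = True
except ImportError:
    PORTED = False  # packages unavailable; the call sequence above is the point

print("ported to daal4py:", PORTED)
```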
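The auto-mixed-precision TensorFlow samples above rely on Keras's global dtype policy. A minimal sketch, assuming TensorFlow 2.4+ and skipping gracefully when TensorFlow is not installed:

```python
# Sketch: enable bfloat16 auto-mixed precision globally in TensorFlow/Keras.
# Assumes tensorflow>=2.4; degrades gracefully if TensorFlow is absent.
try:
    import tensorflow as tf
    tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
    POLICY = tf.keras.mixed_precision.global_policy().name
except ImportError:
    POLICY = "float32 (TensorFlow not installed)"

print("compute policy:", POLICY)
```

With the policy set, layers built afterwards compute in bfloat16 while keeping float32 variables, which is what lets AMX-capable hardware accelerate training without model changes.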
# Using Samples in Intel® DevCloud for oneAPI
To get started using samples in the DevCloud, refer to [Using AI samples in Intel® DevCloud for oneAPI](https://github.com/intel-ai-tce/oneAPI-samples/tree/devcloud/AI-and-Analytics#using-samples-in-intel-oneapi-devcloud).