
2023.1 AI Kit README Updates #1379

Merged: 23 commits, Feb 24, 2023

Changes from all commits
d0b3299
Intel Python Numpy Numba_dpes kNN sample (#1292)
krzeszew Jan 12, 2023
303ab84
Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample (#1…
alexsin368 Jan 13, 2023
d5aea45
add IntelPytorch Quantization code samples (#1301)
ZhaoqiongZ Jan 19, 2023
48264e5
AMX bfloat16 mixed precision learning TensorFlow Transformer sample (…
krzeszew Jan 26, 2023
b4c3231
[New Sample] Intel Extension for TensorFlow Getting Started (#1313)
louie-tsai Jan 26, 2023
799b80c
[New Sample] [oneDNN] Benchdnn tutorial (#1315)
yehudaorel Jan 26, 2023
80823d7
Add files via upload (#1320)
YuningQiu Jan 26, 2023
23f50dc
[New Sample] oneCCL Bindings for PyTorch Getting Started (#1316)
louie-tsai Jan 26, 2023
5bca91c
[New Sample] Intel Extension for PyTorch Getting Started (#1314)
louie-tsai Jan 26, 2023
0d582e8
Update requirements.txt
jimmytwei Jan 27, 2023
5546046
Updated title field in sample.json in TF Transformer AMX bfloat16 Mix…
jimmytwei Jan 27, 2023
cadee2e
Merge branch 'master' into 2023.1_AIKit
jimmytwei Jan 27, 2023
0240431
Merge branch 'master' into 2023.1_AIKit
jimmytwei Feb 6, 2023
0c9027b
add arch checker class (#1332)
louie-tsai Feb 6, 2023
2bbf2f0
change gpu.patch to convert the code samples from cpu to gpu correctl…
ZhaoqiongZ Feb 6, 2023
f661725
Fixes for spelling in AMX bfloat16 transformer sample and printing er…
krzeszew Feb 6, 2023
f8136d7
2023.1 ai kit itex get started example fix (#1338)
wangkl2 Feb 7, 2023
fe90d1e
fix resnet inference demo link (#1339)
ZhaoqiongZ Feb 7, 2023
ce40db3
Merge branch 'master' into 2023.1_AIKit
jimmytwei Feb 13, 2023
4356c8d
Fix printing issue in numpy vs numba AI sample (#1356)
krzeszew Feb 15, 2023
219b406
Fix Invalid Kmeans parameters on oneAPI 2023 (#1345)
xiguiw Feb 16, 2023
3a3c982
Merge branch 'master' into 2023.1_AIKit
jimmytwei Feb 16, 2023
628d43c
Update README to add new samples into the list (#1366)
louie-tsai Feb 22, 2023

AI-and-Analytics/Features-and-Functionality/README.md (25 changes: 15 additions & 10 deletions)
@@ -16,16 +16,21 @@ Third party program Licenses can be found here: [third-party-programs.txt](https

| Component | Folder | Description
| --------- | ------------------------------------------------ | -
-| Scikit-learn | [IntelScikitLearn_Extensions_SVC_Adult](IntelScikitLearn_Extensions_SVC_Adult) | Use Intel® Extension for Scikit-learn to accelerate the training and prediction with SVC algorithm on Adult dataset. Compare the performance of SVC algorithm optimized through Intel® Extension for Scikit-learn against original Scikit-learn.
-| daal4py | [IntelPython_daal4py_DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed Linear Regression model with oneAPI Data Analytics Library (oneDAL) daal4py library memory objects.
-| PyTorch | [IntelPyTorch_Extensions_AutoMixedPrecision](IntelPyTorch_Extensions_AutoMixedPrecision) | Download, compile, and get started with Intel® Extension for PyTorch*.
-| PyTorch | [IntelPyTorch_TrainingOptimizations_AMX_BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch with Advanced Matrix Extensions Bfloat16.
-| PyTorch | [IntelPyTorch_TorchCCL_Multinode_Training](IntelPyTorch_TorchCCL_Multinode_Training) | Perform distributed training with oneAPI Collective Communications Library (oneCCL) in PyTorch.
-| TensorFlow & Model Zoo | [IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8](IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8) | Run ResNet50 inference on Intel's pretrained FP32 and Int8 model.
-| TensorFlow & Model Zoo | [IntelTensorFlow_PerformanceAnalysis](IntelTensorFlow_PerformanceAnalysis) | Analyze the performance difference between Stock Tensorflow and Intel Tensorflow.
-| TensorFlow | [IntelTensorFlow_InferenceOptimization](IntelTensorFlow_InferenceOptimization) | Optimize a pre-trained model for a better inference performance.
-| XGBoost | [IntelPython_XGBoost_Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit from using Intel optimized XGBoost compared to un-optimized XGBoost 0.81.
-| XGBoost | [IntelPython_XGBoost_daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of minimal code changes to port pre-trained XGBoost model to daal4py prediction for much faster prediction than XGBoost prediction..
+| PyTorch | [IntelPyTorch Extensions Inference Optimization](IntelPyTorch_Extensions_Inference_Optimization) | Apply IPEX optimizations to a PyTorch workload to gain a performance boost.
+| PyTorch | [IntelPyTorch TrainingOptimizations AMX BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch with Advanced Matrix Extensions Bfloat16.
+| Numpy, Numba | [IntelPython Numpy Numba dpex kNN](IntelPython_Numpy_Numba_dpex_kNN) | Optimize a k-NN model with numba_dpex operations without sacrificing accuracy.
+| XGBoost | [IntelPython XGBoost Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit of using Intel-optimized XGBoost compared to un-optimized XGBoost 0.81.
+| XGBoost | [IntelPython XGBoost daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of minimal code changes to port a pre-trained XGBoost model to daal4py prediction, which is much faster than XGBoost prediction.
+| daal4py | [IntelPython daal4py DistributedKMeans](IntelPython_daal4py_DistributedKMeans) | Train and predict with a distributed k-means model using the Python API package daal4py, powered by the oneAPI Data Analytics Library (oneDAL).
+| daal4py | [IntelPython daal4py DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed Linear Regression model with oneAPI Data Analytics Library (oneDAL) daal4py library memory objects.
+| PyTorch | [IntelPytorch Quantization](IntelPytorch_Quantization) | Improve inference performance using Intel® Extension for PyTorch* (IPEX) with feature quantization.
+| TensorFlow | [IntelTensorFlow AMX BF16 Training](IntelTensorFlow_AMX_BF16_Training) | Improve training performance with Intel® AMX BF16.
+| TensorFlow | [IntelTensorFlow Enabling Auto Mixed Precision for TransferLearning](IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning) | Enable auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow*.
+| TensorFlow | [IntelTensorFlow InferenceOptimization](IntelTensorFlow_InferenceOptimization) | Optimize a pre-trained model for better inference performance.
+| TensorFlow & Model Zoo | [IntelTensorFlow ModelZoo Inference with FP32 Int8](IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8) | Run ResNet50 inference on Intel's pretrained FP32 and Int8 models.
+| TensorFlow & Model Zoo | [IntelTensorFlow PerformanceAnalysis](IntelTensorFlow_PerformanceAnalysis) | Analyze the performance difference between stock TensorFlow and Intel TensorFlow.
+| TensorFlow | [IntelTensorFlow Transformer AMX bfloat16 MixedPrecision](IntelTensorFlow_Transformer_AMX_bfloat16_MixedPrecision) | Run a transformer classification model with bfloat16 mixed precision.
+| Scikit-learn | [IntelScikitLearn Extensions SVC Adult](IntelScikitLearn_Extensions_SVC_Adult) | Use Intel® Extension for Scikit-learn to accelerate training and prediction with the SVC algorithm on the Adult dataset. Compare the performance of the SVC algorithm optimized through Intel® Extension for Scikit-learn against original Scikit-learn.

# Using Samples in Intel® DevCloud for oneAPI
To get started using samples in the DevCloud, refer to [Using AI samples in Intel® DevCloud for oneAPI](https://github.com/intel-ai-tce/oneAPI-samples/tree/devcloud/AI-and-Analytics#using-samples-in-intel-oneapi-devcloud).
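
For orientation, the Scikit-learn row in the table above relies on the patching pattern of Intel® Extension for Scikit-learn. The following is a minimal sketch of that pattern, not code taken from this PR: it assumes the scikit-learn-intelex package is installed and uses a synthetic dataset as a stand-in for the Adult dataset used by the actual sample.

```python
# Minimal sketch: accelerate SVC by patching scikit-learn with
# Intel(R) Extension for Scikit-learn (pip package: scikit-learn-intelex).
from sklearnex import patch_sklearn
patch_sklearn()  # must run before importing estimators from sklearn

from sklearn.datasets import make_classification  # synthetic stand-in for the Adult dataset
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(C=1.0, kernel="rbf").fit(X_train, y_train)  # dispatched to the oneDAL-backed SVC
print("accuracy:", clf.score(X_test, y_test))
```

After patching, the stock scikit-learn API is unchanged; the sample in the folder compares timings of the same SVC workload with and without the patch.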