Final touches to website content for SciPy 2024 #5

Merged 11 commits on Jul 10, 2024
6 changes: 3 additions & 3 deletions content/en/_index.md
@@ -18,10 +18,10 @@ title: Portable Data-Parallel Python Extensions with oneAPI
<div class="lead text-center">
<div class="mx-auto mb-5">
<a class="btn btn-lg btn-secondary me-3 mb-4" href="https://IntelPython.github.io/portable-data-parallel-extensions-scipy-2024/docs/">
First<i class="fa-solid fa-question ms-2 "></i>
Get Started<i class="fa-solid fa-play ms-2"></i>
</a>
<a class="btn btn-lg btn-secondary me-3 mb-4" href="https://github.com/google/docsy-example">
Demonstration<i class="fab fa-github ms-2 "></i>
<a class="btn btn-lg btn-secondary me-3 mb-4" href="https://github.com/IntelPython/example-portable-data-parallel-extensions">
Examples<i class="fab fa-github ms-2 "></i>
</a>
</div>
</div>
4 changes: 3 additions & 1 deletion content/en/docs/_index.md
@@ -10,4 +10,6 @@ by [Nikita Grigorian](https://github.com/ndgrigorian) and [Oleksandr Pavlyk](htt

This poster is intended to introduce writing portable data-parallel Python extensions using oneAPI.

We present several examples, starting with the basics of initializing a USM (universal shared memory) array, then a KDE (kernel density estimation) with pure DPC++/Sycl, then a KDE Python extension, and finally how to write a portable Python extension which uses oneMKL.
We present several examples, starting with the basics of initializing a USM (unified shared memory) array, then a KDE (kernel density estimation) with pure DPC++/SYCL, then a KDE Python extension, and finally how to write a portable Python extension which uses oneMKL.

The examples can be found [here](https://github.com/IntelPython/example-portable-data-parallel-extensions).
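For orientation, a USM allocation is simply a pointer obtained from a SYCL queue that device code can read and write. The minimal sketch below is not taken from the examples repository; it allocates a shared-USM array, fills it in a kernel, and reads it back on the host.

```cpp
// Minimal sketch (not from the examples repository): allocate, fill, and read a
// shared-USM array. Shared USM is accessible from both host and device.
#include <sycl/sycl.hpp>
#include <cstddef>

int main() {
    sycl::queue q{};  // default device selection

    constexpr std::size_t n = 1024;
    double *data = sycl::malloc_shared<double>(n, q);

    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        data[i] = 0.5 * static_cast<double>(i[0]);
    }).wait();

    const double last = data[n - 1];  // host can dereference shared USM directly
    sycl::free(data, q);
    return last == 0.5 * (n - 1) ? 0 : 1;
}
```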
8 changes: 4 additions & 4 deletions content/en/docs/kde-cpp.md
@@ -61,7 +61,7 @@ for further summation by another kernel operating in a similar fashion.
```

Such an approach, known as tree reduction, is implemented in the ``kernel_density_esimation_temps`` function found in
``"steps/kernel_density_estimation_cpp/kde.hpp"``.
[``"steps/kernel_density_estimation_cpp/kde.hpp"``](https://github.com/IntelPython/example-portable-data-parallel-extensions/blob/main/steps/kernel_density_estimation_cpp/kde.hpp).

Use of temporary allocation can be avoided if each work-item atomically adds the value of the local sum to the
appropriate zero-initialized location in the output array, as in implementation ``kernel_density_estimation_atomic_ref``
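A minimal sketch of that atomic variant is shown below. The names and structure are hypothetical and simplified relative to the code in ``kde.hpp``; it assumes row-major data and a device that supports 64-bit floating-point atomics.

```cpp
// Sketch only (hypothetical names, simplified relative to kde.hpp): every work-item
// evaluates one (data point, estimation point) pair and atomically accumulates its
// contribution into the zero-initialized output f_est.
#include <sycl/sycl.hpp>
#include <cstddef>

void kde_atomic_sketch(sycl::queue &q,
                       const double *x_data, std::size_t n_data,  // (n_data, dim), row-major
                       const double *x_est, std::size_t n_est,    // (n_est, dim), row-major
                       std::size_t dim, double h,
                       double *f_est)                              // zero-initialized, length n_est
{
    q.parallel_for(sycl::range<1>{n_data * n_est}, [=](sycl::id<1> idx) {
        const std::size_t i = idx[0] / n_est;  // data point
        const std::size_t j = idx[0] % n_est;  // estimation point

        double sq_dist = 0.0;
        for (std::size_t k = 0; k < dim; ++k) {
            const double d = x_est[j * dim + k] - x_data[i * dim + k];
            sq_dist += d * d;
        }
        const double contrib = sycl::exp(-sq_dist / (2.0 * h * h));  // normalization omitted

        sycl::atomic_ref<double, sycl::memory_order::relaxed,
                         sycl::memory_scope::device,
                         sycl::access::address_space::global_space>
            acc(f_est[j]);
        acc += contrib;  // no temporary allocations needed
    }).wait();
}
```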
@@ -119,10 +119,10 @@ in the work-group without accessing the global memory. This could be done effici
```

The complete implementation can be found in the ``kernel_density_estimation_work_group_reduce_and_atomic_ref`` function
in ``"steps/kernel_density_estimation_cpp/kde.hpp"``.
in [``"steps/kernel_density_estimation_cpp/kde.hpp"``](https://github.com/IntelPython/example-portable-data-parallel-extensions/blob/main/steps/kernel_density_estimation_cpp/kde.hpp).
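The core of that technique can be sketched in isolation. The following is an illustrative reduction, not the repository's kernel: each work-group first combines its items' values with `sycl::reduce_over_group`, then issues a single atomic add per group.

```cpp
// Sketch only (not the repository's kernel): sum n values into *result, assumed
// zero-initialized, using a group reduction followed by one atomic add per group.
#include <sycl/sycl.hpp>
#include <cstddef>

void group_reduce_sum(sycl::queue &q, const double *vals, std::size_t n, double *result)
{
    constexpr std::size_t wg_size = 256;
    const std::size_t n_groups = (n + wg_size - 1) / wg_size;

    q.parallel_for(
        sycl::nd_range<1>{sycl::range<1>{n_groups * wg_size}, sycl::range<1>{wg_size}},
        [=](sycl::nd_item<1> it) {
            const std::size_t gid = it.get_global_id(0);
            const double v = (gid < n) ? vals[gid] : 0.0;

            // combine the work-group's values without touching global memory
            const double group_sum =
                sycl::reduce_over_group(it.get_group(), v, sycl::plus<double>());

            // one atomic update per work-group instead of one per work-item
            if (it.get_local_id(0) == 0) {
                sycl::atomic_ref<double, sycl::memory_order::relaxed,
                                 sycl::memory_scope::device,
                                 sycl::access::address_space::global_space>
                    acc(*result);
                acc += group_sum;
            }
        }).wait();
}
```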

These implementations are called from C++ application ``"steps/kernel_density_estimation_cpp/app.cpp"``, which
These implementations are called from the C++ application [``"steps/kernel_density_estimation_cpp/app.cpp"``](https://github.com/IntelPython/example-portable-data-parallel-extensions/blob/main/steps/kernel_density_estimation_cpp/app.cpp), which
samples data uniformly distributed over the unit cuboid and estimates the density using kernel density estimation
with a spherically symmetric multivariate Gaussian probability density function as the kernel.
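For reference, with a spherically symmetric Gaussian kernel of bandwidth \\(h\\) in \\(d\\) dimensions, the estimate at a point \\(x\\) built from samples \\(x_1, \dots, x_n\\) takes the standard form

\\[
\hat{f}_h(x) = \frac{1}{n\,(2\pi)^{d/2}\,h^{d}} \sum_{i=1}^{n} \exp\left(-\frac{\lVert x - x_i \rVert^{2}}{2h^{2}}\right).
\\]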

The application can be built using `CMake`, or `Meson`, please refer to [README](steps/kernel_density_estimation_cpp/README.md) document in that folder.
The application can be built using `CMake` or `Meson`; please refer to the [README](https://github.com/IntelPython/example-portable-data-parallel-extensions/blob/main/steps/kernel_density_estimation_cpp/README.md) document in that folder.
16 changes: 9 additions & 7 deletions content/en/docs/kde-python.md
@@ -5,7 +5,7 @@ date: 2024-07-02
weight: 3
---

Since SYCL builds on C++, we are going to use `pybind11` project to generate Python extension.
Since SYCL builds on C++, we are going to use the `pybind11` project to generate a Python extension.
We also need Python objects to carry USM allocations of input and output data, such as those provided by `dpctl` ([Data Parallel Control](https://github.com/IntelPython/dpctl.git) Python package). The `dpctl` package also provides Python objects corresponding to DPC++ runtime objects:

| Python object | SYCL C++ object |
@@ -15,9 +15,9 @@ We also need Python objects to carry USM allocations of input and output data, s
| ``dpctl.SyclContext`` | ``sycl::context`` |
| ``dpctl.SyclEvent`` | ``sycl::event`` |

`dpctl` provides integration with `pybind11` supporting castings between `dpctl` Python objects and corresponding C++ SYCL classes listed in the table above. Furthermore, the integration provides C++ class ``dpctl::tensor::usm_ndarray`` which derives from ``pybind11::object``.
It stores `dpctl.tensor.usm_ndarray` object and provides methods to query its attributes, such as data pointer, dimensionality, shape, strides
and elemental type information.
`dpctl` provides integration with `pybind11` supporting castings between `dpctl` Python objects and corresponding C++ SYCL classes listed in the table above. Furthermore, the integration provides the C++ class ``dpctl::tensor::usm_ndarray`` which derives from ``pybind11::object``.
It stores the `dpctl.tensor.usm_ndarray` object and provides methods to query its attributes, such as data pointer, dimensionality, shape, strides
and elemental type information. Underlying the `dpctl.tensor.usm_ndarray` is a SYCL unified shared memory (USM) allocation. See the [SYCL standard](https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html#sec:usm) or the [dpctl.memory documentation](https://intelpython.github.io/dpctl/latest/api_reference/dpctl/memory.html#dpctl-memory-pyapi) for more details.

For illustration purposes, here is sample extension source code:

@@ -29,7 +29,9 @@ For illustration purposes, here is sample extension source code:
#include <vector>

sycl::event
py_foo(dpctl::tensor::usm_ndarray inp, dpctl::tensor::usm_ndarray out, const std::vector<sycl::event> &deps) {
py_foo(dpctl::tensor::usm_ndarray inp,
dpctl::tensor::usm_ndarray out,
const std::vector<sycl::event> &deps) {
// validation steps skipped

// Execution queue is the queue associated with input arrays
@@ -98,12 +100,12 @@ of the host task a chance at execution.
Of course, if USM memory is not managed by Python, it may be possible to avoid using the GIL altogether.
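One caller-side precaution consistent with this advice is to drop the GIL while blocking on the returned event, so that a host task which itself needs the GIL can run. The following is a sketch only, not the repository's exact code; `inp`, `out`, and `deps` are assumed to be in scope and `py_foo` is the sample function shown above.

```cpp
// Sketch only: release the GIL for the duration of the wait, so that a host task
// which must acquire the GIL (e.g. to release Python references) can execute.
sycl::event ev = py_foo(inp, out, deps);   // py_foo as in the sample above
{
    pybind11::gil_scoped_release release;  // GIL is re-acquired when `release` is destroyed
    ev.wait();
}
```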

An example of a Python extension `"kde_sycl_ext"` that exposes the kernel density estimation code from the previous
section can be found in `"steps/sycl_python_extension"` folder (see [README](steps/sycl_python_extension/README.md)).
section can be found in the [`"steps/sycl_python_extension"`](https://github.com/IntelPython/example-portable-data-parallel-extensions/tree/main/steps/sycl_python_extension) folder (see [README](https://github.com/IntelPython/example-portable-data-parallel-extensions/blob/main/steps/sycl_python_extension/README.md)).

The folder contains a comparison between the `dpctl`-based KDE implementation, which follows the NumPy
implementation [above](#kde_numpy), and the dedicated C++ code:

```
```bash
KDE for n_sample = 1000000, n_est = 17, n_dim = 7, h = 0.05
Result agreed.
kde_dpctl took 0.3404452269896865 seconds
36 changes: 32 additions & 4 deletions content/en/docs/oneMKL.md
@@ -5,9 +5,18 @@ date: 2024-07-02
weight: 4
---

Since `dpctl.tensor.usm_ndarray` is a Python object carrying a USM allocation, it is possible to write extensions which wrap `oneAPI Math Kernel Library Interfaces` ([oneMKL Interfaces](https://github.com/oneapi-src/oneMKL)) routines and then call them on the USM data underlying the `usm_ndarray` container from Python.
Given a matrix \\(A\\), the QR decomposition of \\(A\\) factors it into the product of matrices \\(Q\\) and \\(R\\) such that \\(Q\\) has orthonormal columns and \\(R\\) is upper-triangular.
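In symbols:

\\[
A = Q R, \qquad Q^{T} Q = I, \qquad R_{ij} = 0 \ \text{for } i > j.
\\]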

QR factorization is a common routine in most optimized LAPACK libraries, so rather than implement an algorithm ourselves, it is preferable to find a suitable library routine.

Since `dpctl.tensor.usm_ndarray` is a Python object with an underlying USM allocation, it is possible to write extensions which wrap `oneAPI Math Kernel Library Interfaces` ([oneMKL Interfaces](https://github.com/oneapi-src/oneMKL)) USM routines and then call them on the `dpctl.tensor.usm_ndarray` from Python. These low-level routines can greatly improve the performance of an extension.

`oneMKL Interfaces` can be built to dispatch to a variety of backends including `cuBLAS` and `rocBLAS` (see [oneMKL Interfaces README](https://github.com/oneapi-src/oneMKL?tab=readme-ov-file#oneapi-math-kernel-library-onemkl-interfaces)). The [`portBLAS`](https://github.com/codeplaysoftware/portBLAS) backend is also notable as it is open-source and written in pure SYCL.

`oneMKL` routines are essentially wrappers for the same routine in an underlying backend library, selected according to the targeted device. This means that the same code can be used for NVIDIA, AMD, and Intel devices, making it highly portable.

Looking to the `oneMKL` documentation on [`geqrf`](https://spec.oneapi.io/versions/latest/elements/oneMKL/source/domains/lapack/geqrf.html#geqrf-usm-version):

For an example routine from the `oneMKL` documentation, take [`geqrf`](https://spec.oneapi.io/versions/latest/elements/oneMKL/source/domains/lapack/geqrf.html#geqrf-usm-version):
```cpp
namespace oneapi::mkl::lapack {
cl::sycl::event geqrf(cl::sycl::queue &queue,
@@ -22,6 +31,25 @@ namespace oneapi::mkl::lapack {
}
```

The `pybind11` castings discussed in the previous section enable us to write a simple wrapper function for this routine with `dpctl::tensor::usm_ndarray` inputs and outputs, so long as we take the same precautions to avoid deadlocks. As a result, we can write the extension in much the same way as the `kde_sycl_ext` extension in the previous chapter.
This general format (``sycl::queue``, arguments, and a vector of ``sycl::event``s) is more or less the same throughout the `oneMKL` USM routines.

The `pybind11` castings discussed in the previous section enable us to write a simple wrapper function for this routine with ``dpctl::tensor::usm_ndarray`` inputs and outputs, so long as we take the same precautions to avoid deadlocks. As a result, we can write the extension in much the same way as the `"kde_sycl_ext"` extension in the previous chapter.
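A sketch of such a wrapper is shown below. It is illustrative only: it assumes double precision, a column-major (Fortran-ordered) matrix with leading dimension `m` as oneMKL LAPACK expects, and the accessor methods (`get_queue`, `get_shape`, `get_data<T>`) provided by dpctl's `dpctl4pybind11.hpp` header; the repository's extension is more careful about validation and type dispatch.

```cpp
// Illustrative sketch, not the repository's code: call oneMKL's USM geqrf on the
// data of a dpctl.tensor.usm_ndarray. Assumes a double-precision matrix stored in
// column-major (Fortran) order; validation and layout checks are omitted.
#include <cstdint>
#include <vector>
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>
#include "dpctl4pybind11.hpp"

sycl::event
py_geqrf(dpctl::tensor::usm_ndarray a,    // (m, n) matrix, overwritten with R and reflectors
         dpctl::tensor::usm_ndarray tau,  // min(m, n) scalar factors of the reflectors
         const std::vector<sycl::event> &depends)
{
    sycl::queue q = a.get_queue();
    const std::int64_t m = a.get_shape(0);
    const std::int64_t n = a.get_shape(1);
    const std::int64_t lda = m;  // leading dimension of a column-major (m, n) matrix

    double *a_ptr = a.get_data<double>();
    double *tau_ptr = tau.get_data<double>();

    // oneMKL asks the caller to provide scratch space of a queried size
    const std::int64_t scratch_size =
        oneapi::mkl::lapack::geqrf_scratchpad_size<double>(q, m, n, lda);
    double *scratch = sycl::malloc_device<double>(scratch_size, q);

    sycl::event comp_ev = oneapi::mkl::lapack::geqrf(
        q, m, n, a_ptr, lda, tau_ptr, scratch, scratch_size, depends);

    // free the scratchpad once the factorization has completed
    sycl::event cleanup_ev = q.submit([&](sycl::handler &cgh) {
        cgh.depends_on(comp_ev);
        cgh.host_task([scratch, q]() { sycl::free(scratch, q); });
    });
    return cleanup_ev;
}
```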

An example of a Python extension "mkl_interface_ext" that uses `oneMKL` calls to implement a QR decomposition can be found in "steps/mkl_interface" folder (see [README](steps/mkl_interface/README.md)).
An example of a Python extension `"mkl_interface_ext"` that uses `oneMKL` calls to implement a QR decomposition can be found in the [`"steps/mkl_interface"`](https://github.com/IntelPython/example-portable-data-parallel-extensions/tree/main/steps/mkl_interface) folder (see [README](https://github.com/IntelPython/example-portable-data-parallel-extensions/blob/main/steps/mkl_interface/README.md)).

Running `run.py` in that folder executes the tests found in [`"steps/mkl_interface/tests"`](https://github.com/IntelPython/example-portable-data-parallel-extensions/tree/main/steps/mkl_interface/tests) and then a larger benchmark which compares NumPy's `linalg.qr` (for reference) to the extension's implementation:

```bash
$ python run.py
Using device NVIDIA GeForce GT 1030
================================================= test session starts ==================================================
collected 8 items

tests/test_qr.py ........ [100%]

================================================== 8 passed in 0.45s ===================================================
QR decomposition for matrix of size = (3000, 3000)
Result agreed.
qr took 0.016026005148887634 seconds
np.linalg.qr took 0.5165981948375702 seconds
```
1 change: 0 additions & 1 deletion layouts/404.html
@@ -2,6 +2,5 @@
<div class="td-content">
<h1>Not found</h1>
<p>Oops! This page doesn't exist. Try going back to the <a href="{{ "" | relURL }}">home page</a>.</p>
<p>You can learn how to make a 404 page like this in <a href="https://gohugo.io/templates/404/">Custom 404 Pages</a>.</p>
</div>
{{- end }}