[MLIR] Use test-lower-to-nvvm for sm_90 Integration Tests #68184

Merged: 1 commit, Oct 4, 2023
@@ -1,25 +1,11 @@
 // RUN: mlir-opt %s \
-// RUN: -convert-nvgpu-to-nvvm \
-// RUN: -gpu-kernel-outlining \
-// RUN: -convert-vector-to-scf \
-// RUN: -convert-scf-to-cf \
-// RUN: -convert-nvvm-to-llvm \
-// RUN: -convert-vector-to-llvm \
-// RUN: -convert-index-to-llvm=index-bitwidth=32 \
-// RUN: -convert-arith-to-llvm \
-// RUN: -finalize-memref-to-llvm='use-opaque-pointers=1' \
-// RUN: -convert-func-to-llvm \
-// RUN: -canonicalize -cse \
-// RUN: -expand-strided-metadata --nvvm-attach-target="module=main_kernel features=+ptx80 chip=sm_90 O=3" \
-// RUN: | mlir-opt -pass-pipeline='builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm,convert-index-to-llvm{index-bitwidth=32},canonicalize,cse))' \
-// RUN: | mlir-opt --gpu-to-llvm --gpu-module-to-binary=format=%gpu_compilation_format -canonicalize -cse -reconcile-unrealized-casts \
+// RUN: -test-lower-to-nvvm="cubin-chip=sm_90 cubin-features=+ptx80 opt-level=3" \
 // RUN: | mlir-cpu-runner \
 // RUN: --shared-libs=%mlir_cuda_runtime \
 // RUN: --shared-libs=%mlir_runner_utils \
 // RUN: --entry-point-result=void \
 // RUN: | FileCheck %s
-
 
 // Test swizzling with TMA load
 // 128B Swizzle Each numbered cell is 16 byte
 // |-------------------------------|
@@ -1,19 +1,5 @@
 // RUN: mlir-opt %s \
-// RUN: -convert-nvgpu-to-nvvm \
-// RUN: -canonicalize -cse \
-// RUN: -gpu-kernel-outlining \
-// RUN: -convert-vector-to-scf \
-// RUN: -convert-scf-to-cf \
-// RUN: -convert-nvvm-to-llvm \
-// RUN: -convert-vector-to-llvm \
-// RUN: -convert-index-to-llvm=index-bitwidth=32 \
-// RUN: -convert-arith-to-llvm \
-// RUN: -finalize-memref-to-llvm='use-opaque-pointers=1' \
-// RUN: -convert-func-to-llvm \
-// RUN: -canonicalize -cse \
-// RUN: -expand-strided-metadata --nvvm-attach-target="module=main_kernel features=+ptx80 chip=sm_90 O=3" \
-// RUN: | mlir-opt -pass-pipeline='builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm,convert-index-to-llvm{index-bitwidth=32},canonicalize,cse))' \
-// RUN: | mlir-opt --gpu-to-llvm --gpu-module-to-binary -canonicalize -cse -reconcile-unrealized-casts \
+// RUN: -test-lower-to-nvvm="cubin-chip=sm_90 cubin-features=+ptx80 opt-level=3" \
 // RUN: | mlir-cpu-runner \
 // RUN: --shared-libs=%mlir_cuda_runtime \
 // RUN: --shared-libs=%mlir_runner_utils \
@@ -1,16 +1,10 @@
-// RUN: mlir-opt %s --convert-nvgpu-to-nvvm \
-// RUN: -gpu-kernel-outlining \
-// RUN: -convert-nvvm-to-llvm \
-// RUN: -convert-scf-to-cf \
-// RUN: -convert-vector-to-llvm \
-// RUN: -convert-index-to-llvm=index-bitwidth=32 \
-// RUN: -convert-arith-to-llvm \
-// RUN: -finalize-memref-to-llvm='use-opaque-pointers=1' \
-// RUN: -convert-func-to-llvm \
-// RUN: -expand-strided-metadata --nvvm-attach-target="module=main_kernel features=+ptx80 chip=sm_90 O=3" \
-// RUN: | mlir-opt -pass-pipeline='builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm,convert-index-to-llvm{index-bitwidth=32},canonicalize,cse))' \
-// RUN: | mlir-opt --gpu-to-llvm --gpu-module-to-binary=format=%gpu_compilation_format -canonicalize -cse -reconcile-unrealized-casts -debug-only=serialize-to-isa \
-// RUN: 2>&1 | FileCheck %s --check-prefixes=CHECK-PTX
+// RUN: mlir-opt %s \
+// RUN: -test-lower-to-nvvm="cubin-chip=sm_90 cubin-features=+ptx80 opt-level=3" \
+// RUN: | mlir-cpu-runner \
+// RUN: --shared-libs=%mlir_cuda_runtime \
+// RUN: --shared-libs=%mlir_runner_utils \
+// RUN: --entry-point-result=void \
+// RUN: | FileCheck %s
 
 // Basic PTX check to make sure we are generating the right instructions.
 
mlir/test/lib/Dialect/GPU/TestLowerToNVVM.cpp (23 changes: 18 additions & 5 deletions)
@@ -20,6 +20,7 @@
 #include "mlir/Conversion/MathToLLVM/MathToLLVM.h"
 #include "mlir/Conversion/MemRefToLLVM/MemRefToLLVM.h"
 #include "mlir/Conversion/NVGPUToNVVM/NVGPUToNVVM.h"
+#include "mlir/Conversion/NVVMToLLVM/NVVMToLLVM.h"
 #include "mlir/Conversion/ReconcileUnrealizedCasts/ReconcileUnrealizedCasts.h"
 #include "mlir/Conversion/SCFToControlFlow/SCFToControlFlow.h"
 #include "mlir/Conversion/VectorToLLVM/ConvertVectorToLLVMPass.h"
@@ -143,11 +144,6 @@ void buildGpuPassPipeline(OpPassManager &pm,
   pm.addNestedPass<gpu::GPUModuleOp>(
       createConvertGpuOpsToNVVMOps(convertGpuOpsToNVVMOpsOptions));
 
-  // TODO: C++20 designated initializers.
-  ConvertNVGPUToNVVMPassOptions convertNVGPUToNVVMPassOptions;
-  convertNVGPUToNVVMPassOptions.useOpaquePointers = true;
-  pm.addNestedPass<gpu::GPUModuleOp>(
-      createConvertNVGPUToNVVMPass(convertNVGPUToNVVMPassOptions));
   pm.addNestedPass<gpu::GPUModuleOp>(createConvertSCFToCFPass());
 
   // Convert vector to LLVM (always needed).
@@ -157,6 +153,9 @@
   pm.addNestedPass<gpu::GPUModuleOp>(
       createConvertVectorToLLVMPass(convertVectorToLLVMPassOptions));
 
+  // This pass is needed for PTX building
+  pm.addNestedPass<gpu::GPUModuleOp>(createConvertNVVMToLLVMPass());
+
   // Sprinkle some cleanups.
   pm.addPass(createCanonicalizerPass());
   pm.addPass(createCSEPass());
@@ -167,6 +166,20 @@
 
 void buildLowerToNVVMPassPipeline(OpPassManager &pm,
                                   const TestLowerToNVVMOptions &options) {
+  // Start with a cleanup pass.
+  pm.addPass(createCanonicalizerPass());
+  pm.addPass(createCSEPass());
+
+  //===----------------------------------------------------------------------===//
+  // NVGPU lowers device code as well as host code to the driver, so must run
+  // before outlining.
+  //===----------------------------------------------------------------------===//
+  // TODO: C++20 designated initializers.
+  ConvertNVGPUToNVVMPassOptions convertNVGPUToNVVMPassOptions;

Member Author commented:
Doing this here helps in two ways:

  1. It preserves the nvgpu.tensormap.descriptor type for nvgpu.tma.async.load.
  2. It resolves types before GPU kernel outlining, so only the device pointer needs to be passed into the kernel.

To elaborate, consider this example:

%d = nvgpu.tma.create.descriptor %0 box[%c128, %c64] : memref<*xf16>
   -> !nvgpu.tensormap.descriptor<tensor = !shmemlhs, swizzle = swizzle_128b, l2promo = none, oob = zero, interleave = none>
...
gpu.launch() {
	nvgpu.tma.async.load %d[...]...
}

tma.create.descriptor:

  1. Invokes the CUDA driver to generate the TMA descriptor.
  2. Returns only the device pointer.

tma.async.load:

  1. Generates PTX for the TMA load.
  2. Requires knowledge of the l2promo and swizzle settings.
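
For context on step 1 of tma.create.descriptor: on the host side, descriptor generation corresponds to a call into the CUDA driver's tensor-map API. Below is a minimal C++ sketch of that call for the f16 example above, assuming the CUDA 12 driver API (cuTensorMapEncodeTiled); the function and variable names are illustrative, and the enum choices mirror swizzle_128b, l2promo = none, oob = zero, interleave = none:

#include <cuda.h>
#include <cstdint>

// Illustrative sketch: encode a TMA descriptor for a 2-D f16 tensor with a
// 128x64-element box, mirroring nvgpu.tma.create.descriptor above.
// Error handling is elided.
CUtensorMap makeTmaDescriptor(void *globalAddr, uint64_t rows, uint64_t cols) {
  CUtensorMap map;
  cuuint64_t globalDim[2] = {cols, rows};   // fastest-moving dimension first
  cuuint64_t globalStrides[1] = {cols * 2}; // row stride in bytes (f16 = 2 B)
  cuuint32_t boxDim[2] = {64, 128};         // box[%c128, %c64]; ordering illustrative
  cuuint32_t elementStrides[2] = {1, 1};
  cuTensorMapEncodeTiled(
      &map, CU_TENSOR_MAP_DATA_TYPE_FLOAT16, /*tensorRank=*/2, globalAddr,
      globalDim, globalStrides, boxDim, elementStrides,
      CU_TENSOR_MAP_INTERLEAVE_NONE,      // interleave = none
      CU_TENSOR_MAP_SWIZZLE_128B,         // swizzle_128b
      CU_TENSOR_MAP_L2_PROMOTION_NONE,    // l2promo = none
      CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE); // oob = zero: OOB reads fill zeros
  return map;
}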

+  convertNVGPUToNVVMPassOptions.useOpaquePointers = true;
+  pm.addNestedPass<func::FuncOp>(
+      createConvertNVGPUToNVVMPass(convertNVGPUToNVVMPassOptions));
+
   //===----------------------------------------------------------------------===//
   // Host-specific stuff.
   //===----------------------------------------------------------------------===//
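Taken together, the hunks above reorder the pipeline: NVGPU-to-NVVM moves out of the GPU-module pipeline and runs on host functions before kernel outlining, while NVVM-to-LLVM is added inside the GPU module. A condensed C++ sketch of the resulting ordering, with most passes elided and an illustrative function name (see TestLowerToNVVM.cpp for the real sequence):

#include "mlir/Conversion/NVGPUToNVVM/NVGPUToNVVM.h"
#include "mlir/Conversion/NVVMToLLVM/NVVMToLLVM.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/GPU/IR/GPUDialect.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

using namespace mlir;

// Condensed ordering after this change; most passes elided.
static void sketchLowerToNVVMOrdering(OpPassManager &pm) {
  // Cleanups first.
  pm.addPass(createCanonicalizerPass());
  pm.addPass(createCSEPass());

  // NVGPU -> NVVM on host functions, before kernel outlining, so that
  // nvgpu.tensormap.descriptor types are resolved and outlined kernels
  // capture only the device pointer.
  ConvertNVGPUToNVVMPassOptions nvgpuOptions;
  nvgpuOptions.useOpaquePointers = true;
  pm.addNestedPass<func::FuncOp>(createConvertNVGPUToNVVMPass(nvgpuOptions));

  // ... kernel outlining and the rest of the host lowering elided ...

  // Inside the GPU module, NVVM -> LLVM runs after the vector lowering;
  // it is the pass that enables PTX building.
  pm.addNestedPass<gpu::GPUModuleOp>(createConvertNVVMToLLVMPass());
}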