Revert "Add host-supports-nvptx requirement to lit tests" (#66102 and #66129) #66225

No description provided.
Merged
Conversation
olegshyshkov approved these changes on Sep 13, 2023
@llvm/pr-subscribers-mlir-sparse @llvm/pr-subscribers-mlir-vector

Changes: None

Full diff: https://github.com//pull/66225.diff

25 Files Affected:

diff --git a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/dump-ptx.mlir b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/dump-ptx.mlir
index 5b4bdbe31dab334..0cb06b7bf1d2001 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/dump-ptx.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/dump-ptx.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm -debug-only=serialize-to-isa \
 // RUN: 2>&1 | FileCheck %s
diff --git a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-const.mlir b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-const.mlir
index ca47de6cca27f6d..2c09ae298e353a9 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-const.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-const.mlir
@@ -1,4 +1,3 @@
-// REQUIRES: host-supports-nvptx
 //
 // NOTE: this test requires gpu-sm80
 //
diff --git a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec.mlir b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec.mlir
index c5c3546cdf01694..c032201b781f5ee 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec.mlir
@@ -1,4 +1,3 @@
-// REQUIRES: host-supports-nvptx
 //
 // NOTE: this test requires gpu-sm80
 //
diff --git a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-mma-2-4-f16.mlir b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-mma-2-4-f16.mlir
index aee8a6a6558e4f5..80972f244ec02d7 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-mma-2-4-f16.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-mma-2-4-f16.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 //
 // NOTE: this test requires gpu-sm80
 //
diff --git a/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-reduction-distribute.mlir b/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-reduction-distribute.mlir
index bc5737427a15160..8c991493a2b0174 100644
--- a/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-reduction-distribute.mlir
+++ b/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-reduction-distribute.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s -test-vector-warp-distribute="hoist-uniform distribute-transfer-write propagate-distribution" -canonicalize |\
 // RUN: mlir-opt -test-vector-warp-distribute=rewrite-warp-ops-to-scf-if |\
 // RUN: mlir-opt -lower-affine -convert-vector-to-scf -convert-scf-to-cf -convert-vector-to-llvm \
diff --git a/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-warp-distribute.mlir b/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-warp-distribute.mlir
index 0efac62bbd7afee..f26c18c4ae3dd28 100644
--- a/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-warp-distribute.mlir
+++ b/mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-warp-distribute.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // Run the test cases without distributing ops to test default lowering. Run
 // everything on the same thread.
 // RUN: mlir-opt %s -test-vector-warp-distribute=rewrite-warp-ops-to-scf-if -canonicalize | \
diff --git a/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir b/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir
index d959fdb6a9db178..56d1e6d2973562b 100644
--- a/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir
+++ b/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: -test-transform-dialect-interpreter \
 // RUN: -test-transform-dialect-erase-schedule \
diff --git a/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir b/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir
index 0ec15f2a9c79d70..357ab8ec4d75921 100644
--- a/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir
+++ b/mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-//
 // RUN: mlir-opt %s \
 // RUN: -test-transform-dialect-interpreter \
 // RUN: | FileCheck %s --check-prefix=CHECK-MMA-SYNC
diff --git a/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f16.mlir b/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f16.mlir
index 4d8a281113593c6..591bf1b4fd18231 100644
--- a/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f16.mlir
+++ b/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f16.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm="cubin-chip=sm_70" \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32-bare-ptr.mlir b/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32-bare-ptr.mlir
index 664d344b2769bf7..51bd23f817b33f1 100644
--- a/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32-bare-ptr.mlir
+++ b/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32-bare-ptr.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // Tests memref bare pointer lowering convention both host side and kernel-side;
 // this works for only statically shaped memrefs.
 // Similar to the wmma-matmul-f32 but but with the memref bare pointer lowering convention.
diff --git a/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir b/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
index 4d76eb898dc2935..0307b3d504be9f6 100644
--- a/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
+++ b/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm="cubin-chip=sm_70" \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/all-reduce-and.mlir b/mlir/test/Integration/GPU/CUDA/all-reduce-and.mlir
index c48a515ed022135..b131b8682ddee06 100644
--- a/mlir/test/Integration/GPU/CUDA/all-reduce-and.mlir
+++ b/mlir/test/Integration/GPU/CUDA/all-reduce-and.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/all-reduce-max.mlir b/mlir/test/Integration/GPU/CUDA/all-reduce-max.mlir
index e8ffc3f830c7c91..155423db7e05049 100644
--- a/mlir/test/Integration/GPU/CUDA/all-reduce-max.mlir
+++ b/mlir/test/Integration/GPU/CUDA/all-reduce-max.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/all-reduce-min.mlir b/mlir/test/Integration/GPU/CUDA/all-reduce-min.mlir
index fde50e9b6b92fbd..e5047b6efa3bf25 100644
--- a/mlir/test/Integration/GPU/CUDA/all-reduce-min.mlir
+++ b/mlir/test/Integration/GPU/CUDA/all-reduce-min.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/all-reduce-op.mlir b/mlir/test/Integration/GPU/CUDA/all-reduce-op.mlir
index 08c3571ef1c35fa..163e9fdba60c1a9 100644
--- a/mlir/test/Integration/GPU/CUDA/all-reduce-op.mlir
+++ b/mlir/test/Integration/GPU/CUDA/all-reduce-op.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/all-reduce-or.mlir b/mlir/test/Integration/GPU/CUDA/all-reduce-or.mlir
index 134296f39c2b49e..381db2639c371f3 100644
--- a/mlir/test/Integration/GPU/CUDA/all-reduce-or.mlir
+++ b/mlir/test/Integration/GPU/CUDA/all-reduce-or.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/all-reduce-region.mlir b/mlir/test/Integration/GPU/CUDA/all-reduce-region.mlir
index c2be1b65950ea51..23c6c117e67f36b 100644
--- a/mlir/test/Integration/GPU/CUDA/all-reduce-region.mlir
+++ b/mlir/test/Integration/GPU/CUDA/all-reduce-region.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/all-reduce-xor.mlir b/mlir/test/Integration/GPU/CUDA/all-reduce-xor.mlir
index 6b75321b7bfc235..3c5a100b5b90d57 100644
--- a/mlir/test/Integration/GPU/CUDA/all-reduce-xor.mlir
+++ b/mlir/test/Integration/GPU/CUDA/all-reduce-xor.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/async.mlir b/mlir/test/Integration/GPU/CUDA/async.mlir
index 1314d32a779a883..d2a5127a34c3bdd 100644
--- a/mlir/test/Integration/GPU/CUDA/async.mlir
+++ b/mlir/test/Integration/GPU/CUDA/async.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -gpu-kernel-outlining \
 // RUN: | mlir-opt -pass-pipeline='builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm),nvvm-attach-target)' \
diff --git a/mlir/test/Integration/GPU/CUDA/gpu-to-cubin.mlir b/mlir/test/Integration/GPU/CUDA/gpu-to-cubin.mlir
index abc93f7b1703a66..a5d04f7322b4914 100644
--- a/mlir/test/Integration/GPU/CUDA/gpu-to-cubin.mlir
+++ b/mlir/test/Integration/GPU/CUDA/gpu-to-cubin.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/multiple-all-reduce.mlir b/mlir/test/Integration/GPU/CUDA/multiple-all-reduce.mlir
index 3389f805ac63d0f..7657bf4732d32b7 100644
--- a/mlir/test/Integration/GPU/CUDA/multiple-all-reduce.mlir
+++ b/mlir/test/Integration/GPU/CUDA/multiple-all-reduce.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/printf.mlir b/mlir/test/Integration/GPU/CUDA/printf.mlir
index eef5ac66ca52ad4..1a35d1e78b09475 100644
--- a/mlir/test/Integration/GPU/CUDA/printf.mlir
+++ b/mlir/test/Integration/GPU/CUDA/printf.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/shuffle.mlir b/mlir/test/Integration/GPU/CUDA/shuffle.mlir
index 05cb854d18dd4f3..40fcea857d5b4eb 100644
--- a/mlir/test/Integration/GPU/CUDA/shuffle.mlir
+++ b/mlir/test/Integration/GPU/CUDA/shuffle.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
diff --git a/mlir/test/Integration/GPU/CUDA/sm90/transform-dialect/tma_load_64x8_8x128_noswizzle-transform.mlir b/mlir/test/Integration/GPU/CUDA/sm90/transform-dialect/tma_load_64x8_8x128_noswizzle-transform.mlir
index e66978bc594b1b8..882c63a866eb4f3 100644
--- a/mlir/test/Integration/GPU/CUDA/sm90/transform-dialect/tma_load_64x8_8x128_noswizzle-transform.mlir
+++ b/mlir/test/Integration/GPU/CUDA/sm90/transform-dialect/tma_load_64x8_8x128_noswizzle-transform.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: -test-transform-dialect-interpreter \
 // RUN: -test-transform-dialect-erase-schedule \
diff --git a/mlir/test/Integration/GPU/CUDA/two-modules.mlir b/mlir/test/Integration/GPU/CUDA/two-modules.mlir
index fde66de2fce6e7e..5a9acdf3d8da6ba 100644
--- a/mlir/test/Integration/GPU/CUDA/two-modules.mlir
+++ b/mlir/test/Integration/GPU/CUDA/two-modules.mlir
@@ -1,5 +1,3 @@
-// REQUIRES: host-supports-nvptx
-
 // RUN: mlir-opt %s \
 // RUN: | mlir-opt -test-lower-to-nvvm \
 // RUN: | mlir-cpu-runner \
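For background: lit runs a test only when every feature named in its REQUIRES: line has been added to config.available_features; otherwise the test is reported as UNSUPPORTED, which is how these CUDA integration tests were being skipped on hosts built without NVPTX support. Below is a minimal sketch of how a feature such as host-supports-nvptx is typically advertised from a lit.cfg.py. The config object is injected by lit itself, and the use of config.root.targets to detect whether the NVPTX backend was built is an illustrative assumption, not necessarily the exact check that #66102 added.

# lit.cfg.py (sketch): advertise a feature that tests can name in REQUIRES:.
# The `config` object is provided by lit when it loads this file, so no
# imports are needed here.
# Assumption: config.root.targets holds the LLVM backends that were built
# (LLVM_TARGETS_TO_BUILD); the detection used in #66102 may have differed.
if "NVPTX" in config.root.targets:
    config.available_features.add("host-supports-nvptx")

This revert removes the REQUIRES: gate from the tests themselves, so they no longer depend on any such feature being defined by the test configuration.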
Thanks @frgossen!
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request on Sep 19, 2023