
[mlir][linalg] Fix idx comparison in the vectorizer #112900


Merged: 1 commit merged from andrzej/fix_comparison into llvm:main on Oct 18, 2024

Conversation

banach-space (Contributor)

Fixes the loop comparison condition in the vectorizer: the index is a `size_t`, so the old condition `idx >= 0` was always true.

As that logic is used specifically for vectorising `tensor.extract`, I also
added a test that violates the assumption made inside
`getTrailingNonUnitLoopDimIdx`, namely that the Linalg loops are non-empty.
Vectorizer pre-conditions will capture that case much earlier, making sure
that `getTrailingNonUnitLoopDimIdx` is only run when all of its assumptions
are actually met.

Thank you for pointing this out, @pfusik !
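
For reference, a minimal, self-contained C++ sketch (not the actual LLVM code; `findTrailingNonUnit` and `ranges` are illustrative names) of why the old condition could never fail and how the fixed loop behaves:

```cpp
// Standalone illustration of the unsigned-index comparison bug fixed here.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Returns the index of the trailing non-unit loop range, mirroring the logic
// touched by this PR.
static size_t findTrailingNonUnit(const std::vector<int64_t> &ranges) {
  assert(!ranges.empty() && "Empty loops, nothing to analyse.");
  size_t idx = ranges.size() - 1;
  // Old condition: `idx >= 0`. Since `idx` is unsigned, that is always true;
  // once `idx` is decremented past 0 it wraps around to SIZE_MAX and the next
  // `ranges[idx]` access is out of bounds.
  // Fixed condition: stop decrementing once index 0 is reached.
  for (; idx != 0; idx--)
    if (ranges[idx] != 1)
      break;
  return idx;
}

int main() {
  // One non-unit loop dimension, as assumed by the vectorizer.
  std::vector<int64_t> ranges = {1, 8, 1, 1};
  std::printf("trailing non-unit loop dim: %zu\n", findTrailingNonUnit(ranges));
  return 0;
}
```

Compilers can generally warn about such always-true comparisons of unsigned values against zero, which is how this kind of issue tends to surface.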

@llvmbot (Member) commented Oct 18, 2024

@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)


Full diff: https://github.com/llvm/llvm-project/pull/112900.diff

2 Files Affected:

  • (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+2-1)
  • (modified) mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir (+27)
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index 09c6b2683b4388..af5b43d652f685 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -845,9 +845,10 @@ static uint64_t getTrailingNonUnitLoopDimIdx(LinalgOp linalgOp) {
                  1 &&
              "For statically shaped Linalg Ops, only one "
              "non-unit loop dim is expected");
+  assert(loopRanges.size() != 0 && "Empty loops, nothing to analyse.");
 
   size_t idx = loopRanges.size() - 1;
-  for (; idx >= 0; idx--)
+  for (; idx != 0; idx--)
     if (loopRanges[idx] != 1)
       break;
 
diff --git a/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir b/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir
index 2c56b7139fec49..3560ab2312a2e9 100644
--- a/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir
+++ b/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir
@@ -36,6 +36,33 @@ module attributes {transform.with_named_sequence} {
   }
 }
 
+// -----
+
+#map = affine_map<() -> ()>
+func.func @negative_no_loops(%arg0: tensor<f32>, %arg1: tensor<f32>) -> tensor<f32> {
+  %1 = linalg.generic {
+    indexing_maps = [#map],
+    iterator_types = []
+  } outs(%arg1 : tensor<f32>) {
+  ^bb0(%arg4: f32):
+    %2 = tensor.extract %arg0[] : tensor<f32>
+    linalg.yield %2 : f32
+  } -> tensor<f32>
+  return %1 : tensor<f32>
+}
+// CHECK-LABEL: func.func @negative_no_loops
+// CHECK: tensor.extract
+
+module attributes {transform.with_named_sequence} {
+  transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
+    %0 = transform.structured.match ops{["linalg.generic"]} in %arg1 : (!transform.any_op) -> !transform.any_op
+    %1 = transform.get_parent_op %0 {isolated_from_above} : (!transform.any_op) -> !transform.any_op
+    %2 = transform.structured.vectorize_children_and_apply_patterns %1 : (!transform.any_op) -> !transform.any_op
+    transform.yield
+  }
+}
+
+
 // -----
 
 #map = affine_map<(d0, d1, d2) -> (d0, d1, d2)>

@llvmbot (Member) commented Oct 18, 2024

@llvm/pr-subscribers-mlir-linalg

Author: Andrzej Warzyński (banach-space)
@pfusik pfusik (Contributor) left a comment

LGTM

@banach-space banach-space merged commit 0a3347d into llvm:main Oct 18, 2024
6 of 7 checks passed
@banach-space banach-space deleted the andrzej/fix_comparison branch October 18, 2024 15:56