[mlir][linalg] Fix idx comparison in the vectorizer #112900
@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)

Changes

Fixes the loop comparison condition in the vectorizer. As that logic is used specifically for vectorising `tensor.extract`, I also added a test that violates the assumptions made inside `getTrailingNonUnitLoopDimIdx`, namely that Linalg loops are non-empty.

Thank you for pointing this out, @pfusik !

Full diff: https://github.com/llvm/llvm-project/pull/112900.diff

2 Files Affected:
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index 09c6b2683b4388..af5b43d652f685 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -845,9 +845,10 @@ static uint64_t getTrailingNonUnitLoopDimIdx(LinalgOp linalgOp) {
1 &&
"For statically shaped Linalg Ops, only one "
"non-unit loop dim is expected");
+ assert(loopRanges.size() != 0 && "Empty loops, nothing to analyse.");
size_t idx = loopRanges.size() - 1;
- for (; idx >= 0; idx--)
+ for (; idx != 0; idx--)
if (loopRanges[idx] != 1)
break;
diff --git a/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir b/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir
index 2c56b7139fec49..3560ab2312a2e9 100644
--- a/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir
+++ b/mlir/test/Dialect/Linalg/vectorize-tensor-extract.mlir
@@ -36,6 +36,33 @@ module attributes {transform.with_named_sequence} {
}
}
+// -----
+
+#map = affine_map<() -> ()>
+func.func @negative_no_loops(%arg0: tensor<f32>, %arg1: tensor<f32>) -> tensor<f32> {
+ %1 = linalg.generic {
+ indexing_maps = [#map],
+ iterator_types = []
+ } outs(%arg1 : tensor<f32>) {
+ ^bb0(%arg4: f32):
+ %2 = tensor.extract %arg0[] : tensor<f32>
+ linalg.yield %2 : f32
+ } -> tensor<f32>
+ return %1 : tensor<f32>
+}
+// CHECK-LABEL: func.func @negative_no_loops
+// CHECK: tensor.extract
+
+module attributes {transform.with_named_sequence} {
+ transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
+ %0 = transform.structured.match ops{["linalg.generic"]} in %arg1 : (!transform.any_op) -> !transform.any_op
+ %1 = transform.get_parent_op %0 {isolated_from_above} : (!transform.any_op) -> !transform.any_op
+ %2 = transform.structured.vectorize_children_and_apply_patterns %1 : (!transform.any_op) -> !transform.any_op
+ transform.yield
+ }
+}
+
+
// -----
#map = affine_map<(d0, d1, d2) -> (d0, d1, d2)>
Fixes the loop comparison condition in the vectorizer. As that logic is used specifically for vectorising `tensor.extract`, I also added a test that violates the assumptions made inside `getTrailingNonUnitLoopDimIdx`, namely that Linalg loops are non-empty. Vectorizer pre-conditions will catch that case much earlier, making sure that `getTrailingNonUnitLoopDimIdx` is only run when all of its assumptions are actually met. Thank you for pointing this out, @pfusik !
LGTM