[mlir][tensor] Add e2e test for tensor.unpack with dynamic tile sizes #121557

Merged (2 commits) on Jan 9, 2025
@@ -46,7 +46,7 @@ func.func private @pack(%A: tensor<7x16xi32>) {
%A_cast = tensor.cast %A_pack : tensor<?x16x?x1xi32> to tensor<*xi32>

// Print the results
-// CHECK: Unranked Memref base@ = 0{{.*}} rank = 4 offset = 0 sizes = [1, 16, 8, 1] strides = [128, 8, 1, 1] data =
+// CHECK: Unranked Memref base@ = 0x{{.*}} rank = 4 offset = 0 sizes = [1, 16, 8, 1] strides = [128, 8, 1, 1] data =
// Tile 1: (8 x 1)
// CHECK-NEXT: 1
// CHECK-NEXT: 2
@@ -0,0 +1,110 @@
// DEFINE: %{compile} = mlir-opt %s \
// DEFINE: -transform-interpreter -test-transform-dialect-erase-schedule |\
// DEFINE: mlir-opt \
// DEFINE: -test-lower-to-llvm -o %t
// DEFINE: %{entry_point} = main
// DEFINE: %{run} = mlir-cpu-runner %t -e %{entry_point} -entry-point-result=void \
// DEFINE: -shared-libs=%mlir_runner_utils,%mlir_c_runner_utils

// RUN: rm -f %t && %{compile} && %{run} | FileCheck %s

/// End-to-end test for tensor.unpack where one of the inner tile sizes is
/// dynamic.

func.func @main() {
// Allocate and initialise the inputs
%A_alloc = tensor.empty() : tensor<7x3xi32>

%A = arith.constant dense<[
[[[1],
[2],
[3],
[4],
[5],
[6],
[7],
[123]],
[[8],
[9],
[10],
[11],
[12],
[13],
[14],
[123]],
[[15],
[16],
[17],
[18],
[19],
[20],
[21],
[123]]]
]> : tensor<1x3x8x1xi32>

%A_cast = tensor.cast %A : tensor<1x3x8x1xi32> to tensor<?x3x?x1xi32>
func.call @unpack(%A_cast) : (tensor<?x3x?x1xi32>) -> ()

return
}

func.func private @unpack(%A: tensor<?x3x?x1xi32>) {
%c1 = arith.constant 1 : index
%pad_val = arith.constant 123 : i32

// Dynamic tile size
%tile_size = arith.constant 8 : index
Contributor (comment on lines +55 to +56):
I don't have a solution yet - just a note that this test could become outdated if we ever add a canonicalization pattern or folder that folds the constant tile size into the unpack op. In IREE, we have a dynamic_constant op to prevent exactly this case. We do not have a similar op in MLIR upstream for testing, so I do not have a suggestion here. :(
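
For illustration only - a hypothetical folder of that kind (no such pattern exists upstream today) could absorb the constant tile size into the op, so the dynamic path this test exercises would silently disappear:

```mlir
// Before: the tile size is supplied as an SSA value, i.e. "dynamic".
%c8 = arith.constant 8 : index
%unpacked = tensor.unpack %A inner_dims_pos = [0, 1] inner_tiles = [%c8, 1]
    into %dest : tensor<?x3x?x1xi32> -> tensor<7x3xi32>

// After the hypothetical fold: inner_tiles is fully static, so the
// dynamic-tile-size code path is never taken.
%folded = tensor.unpack %A inner_dims_pos = [0, 1] inner_tiles = [8, 1]
    into %dest : tensor<?x3x?x1xi32> -> tensor<7x3xi32>
```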

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yes, I am a bit concerned about that - thanks for flagging it up!

Now, do we need to worry about this though? The test specifies it's own lowering pipeline (through TD) and canonicalization is used fairly late. So perhaps it will be fine?

Ultimately, my goal is to provide an e2e test that leverages vectorization. This discussion makes me think that only the "scalable vectorization" variants are truly future-proof:

As in, due to "scalability", those tests will just fail if "vectorization" is not used (due to e.g. some other patterns folding things away). The scalability is leveraged here:
https://github.com/llvm/llvm-project/blob/main/mlir/test/Integration/Dialect/Linalg/CPU/ArmSVE/pack-scalable-inner-tile.mlir#L54

Note that I am "forcing" the vector length to be 256 bits, which "auto-magically" makes the tile size "grow" from 8 to 16 (i.e. from default 128 bits to 256 bits). This is only possible when vectorization is used.

Tl;Dr Even if this particular test becomes obsolete, the "scalable" variant (that I am working towards), should remain relevant.

(*) Unwanted from the point of view of this test. Folding constants away is obviously a good thing :)
(**) First, I need to be able to target tensor.insert_slice directly. That's not possible ATM, see this logic in mlir::linalg::vectorize.
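
As a minimal sketch of what "scalable" buys us (the handle name and schedule shape are illustrative, borrowed from this test, not copied from the linked ArmSVE test): marking a tile size as scalable makes the effective tile size a runtime multiple of vscale, so the expected output can only be produced when vectorization actually kicks in:

```mlir
// "[8]" marks the first tile size as scalable: the effective size is
// 8 * vscale rather than the compile-time constant 8.
%tiled, %loops:2 = transform.structured.tile_using_for %pack tile_sizes [[8], 1]
  : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
```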

Contributor:
I think you are right about the scalability part.

> Now, do we need to worry about this, though? The test specifies its own lowering pipeline (through TD), and canonicalization is used fairly late. So perhaps it will be fine?

I think we do not need to worry about it for now; it is mostly just a note. Some drivers can kick in folders (e.g. OpBuilder::createOrFold), and it becomes an issue once folding methods are added to the op: https://mlir.llvm.org/docs/Canonicalization/#canonicalizing-with-the-fold-method

%A_unpack_empty = tensor.empty() : tensor<7x3xi32>

%A_unpack = tensor.unpack %A
inner_dims_pos = [0, 1]
inner_tiles = [%tile_size, 1]
into %A_unpack_empty : tensor<?x3x?x1xi32> -> tensor<7x3xi32>
%A_cast = tensor.cast %A_unpack : tensor<7x3xi32> to tensor<*xi32>

// Print the results
// CHECK: Unranked Memref base@ = 0x{{.*}} rank = 2 offset = 0 sizes = [7, 3] strides = [3, 1] data =
// CHECK-NEXT: [1, 8, 15],
// CHECK-NEXT: [2, 9, 16],
// CHECK-NEXT: [3, 10, 17],
// CHECK-NEXT: [4, 11, 18],
// CHECK-NEXT: [5, 12, 19],
// CHECK-NEXT: [6, 13, 20],
// CHECK-NEXT: [7, 14, 21]
call @printMemrefI32(%A_cast) : (tensor<*xi32>) -> ()

return
}

module @transforms attributes { transform.with_named_sequence } {
transform.named_sequence @__transform_main(%module: !transform.any_op {transform.consume}) {
%pack = transform.structured.match ops{["tensor.unpack"]} in %module : (!transform.any_op) -> !transform.any_op

// 1. Tile so that we can decompose the tensor.unpack op (see step 2)
%c8 = transform.param.constant 8 : i64 -> !transform.param<i64>
%tiled_pack_op_p, %loops:2 = transform.structured.tile_using_for %pack tile_sizes [%c8, 1]
: (!transform.any_op, !transform.param<i64>) -> (!transform.any_op, !transform.any_op, !transform.any_op)

// 2. Decompose the tiled unpack Op into tensor.extract_slice + tensor.insert_slice:
%func_op = transform.get_parent_op %tiled_pack_op_p {isolated_from_above} : (!transform.any_op) -> !transform.op<"func.func">
transform.apply_patterns to %func_op {
transform.apply_patterns.linalg.decompose_pack_unpack
transform.apply_patterns.linalg.decompose_pad
} : !transform.op<"func.func">

// 3. Bufferize before lowering to LLVM
%bufferize = transform.bufferization.one_shot_bufferize %module
{bufferize_function_boundaries=true} : (!transform.any_op) -> !transform.any_op

// 4. Canonicalize
%func_op_bufferized = transform.structured.match ops{["func.func"]} in %bufferize : (!transform.any_op) -> !transform.op<"func.func">
transform.apply_patterns to %func_op_bufferized {
transform.apply_patterns.canonicalization
} : !transform.op<"func.func">

transform.yield
}
}

func.func private @printMemrefI32(%ptr : tensor<*xi32>)