[ValueTypes][RISCV] Add v1bf16 type #111112


Merged
merged 2 commits into from
Oct 6, 2024

Conversation

lukel97
Contributor

@lukel97 lukel97 commented Oct 4, 2024

When trying to add RISC-V fadd reduction cost model tests for bf16, I noticed a crash when the vector type was <1 x bfloat>.

It turns out that this was being scalarized because unlike f16/f32/f64, there's no v1bf16 value type, and the existing cost model code assumed that the legalized type would always be a vector.

This adds v1bf16 to bring bf16 in line with the other fp types.

It also adds some more RISC-V bf16 reduction tests which previously crashed, including tests to ensure that SLP won't emit fadd/fmul reductions for bf16 or f16 w/ zvfhmin after #111000.
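For illustration, a reduction of the previously-crashing shape looks something like this (a hypothetical minimal test, not taken verbatim from the patch):

```llvm
; A single-element bf16 fadd reduction. Before this change, <1 x bfloat>
; had no matching v1bf16 value type, so it legalized to a scalar and the
; cost model's assumption that the legalized type is a vector broke down.
define bfloat @reduce_v1bf16(<1 x bfloat> %v) {
  %r = call bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat 0.0, <1 x bfloat> %v)
  ret bfloat %r
}

declare bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat, <1 x bfloat>)
```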

@llvmbot
Member

llvmbot commented Oct 4, 2024

@llvm/pr-subscribers-llvm-analysis

Author: Luke Lau (lukel97)

Changes



Patch is 61.71 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/111112.diff

6 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/ValueTypes.td (+147-146)
  • (modified) llvm/test/Analysis/CostModel/RISCV/arith-fp.ll (+3-3)
  • (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll (+71-3)
  • (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll (+71-3)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwmaccbf16.ll (+26-188)
  • (modified) llvm/test/Transforms/SLPVectorizer/RISCV/reductions.ll (+152-9)
diff --git a/llvm/include/llvm/CodeGen/ValueTypes.td b/llvm/include/llvm/CodeGen/ValueTypes.td
index ea2c80eaf95836..493c0cfcab60ce 100644
--- a/llvm/include/llvm/CodeGen/ValueTypes.td
+++ b/llvm/include/llvm/CodeGen/ValueTypes.td
@@ -179,157 +179,158 @@ def v128f16  : VTVec<128,  f16,  96>;  //  128 x f16 vector value
 def v256f16  : VTVec<256,  f16,  97>;  //  256 x f16 vector value
 def v512f16  : VTVec<512,  f16,  98>;  //  512 x f16 vector value
 
-def v2bf16   : VTVec<2,   bf16,  99>;  //    2 x bf16 vector value
-def v3bf16   : VTVec<3,   bf16, 100>;  //    3 x bf16 vector value
-def v4bf16   : VTVec<4,   bf16, 101>;  //    4 x bf16 vector value
-def v8bf16   : VTVec<8,   bf16, 102>;  //    8 x bf16 vector value
-def v16bf16  : VTVec<16,  bf16, 103>;  //   16 x bf16 vector value
-def v32bf16  : VTVec<32,  bf16, 104>;  //   32 x bf16 vector value
-def v64bf16  : VTVec<64,  bf16, 105>;  //   64 x bf16 vector value
-def v128bf16 : VTVec<128, bf16, 106>;  //  128 x bf16 vector value
-
-def v1f32    : VTVec<1,    f32, 107>;  //    1 x f32 vector value
-def v2f32    : VTVec<2,    f32, 108>;  //    2 x f32 vector value
-def v3f32    : VTVec<3,    f32, 109>;  //    3 x f32 vector value
-def v4f32    : VTVec<4,    f32, 110>;  //    4 x f32 vector value
-def v5f32    : VTVec<5,    f32, 111>;  //    5 x f32 vector value
-def v6f32    : VTVec<6,    f32, 112>;  //    6 x f32 vector value
-def v7f32    : VTVec<7,    f32, 113>;  //    7 x f32 vector value
-def v8f32    : VTVec<8,    f32, 114>;  //    8 x f32 vector value
-def v9f32    : VTVec<9,    f32, 115>;  //    9 x f32 vector value
-def v10f32   : VTVec<10,   f32, 116>;  //   10 x f32 vector value
-def v11f32   : VTVec<11,   f32, 117>;  //   11 x f32 vector value
-def v12f32   : VTVec<12,   f32, 118>;  //   12 x f32 vector value
-def v16f32   : VTVec<16,   f32, 119>;  //   16 x f32 vector value
-def v32f32   : VTVec<32,   f32, 120>;  //   32 x f32 vector value
-def v64f32   : VTVec<64,   f32, 121>;  //   64 x f32 vector value
-def v128f32  : VTVec<128,  f32, 122>;  //  128 x f32 vector value
-def v256f32  : VTVec<256,  f32, 123>;  //  256 x f32 vector value
-def v512f32  : VTVec<512,  f32, 124>;  //  512 x f32 vector value
-def v1024f32 : VTVec<1024, f32, 125>;  // 1024 x f32 vector value
-def v2048f32 : VTVec<2048, f32, 126>;  // 2048 x f32 vector value
-
-def v1f64    : VTVec<1,    f64, 127>;  //    1 x f64 vector value
-def v2f64    : VTVec<2,    f64, 128>;  //    2 x f64 vector value
-def v3f64    : VTVec<3,    f64, 129>;  //    3 x f64 vector value
-def v4f64    : VTVec<4,    f64, 130>;  //    4 x f64 vector value
-def v8f64    : VTVec<8,    f64, 131>;  //    8 x f64 vector value
-def v16f64   : VTVec<16,   f64, 132>;  //   16 x f64 vector value
-def v32f64   : VTVec<32,   f64, 133>;  //   32 x f64 vector value
-def v64f64   : VTVec<64,   f64, 134>;  //   64 x f64 vector value
-def v128f64  : VTVec<128,  f64, 135>;  //  128 x f64 vector value
-def v256f64  : VTVec<256,  f64, 136>;  //  256 x f64 vector value
-
-def nxv1i1  : VTScalableVec<1,  i1, 137>;  // n x  1 x i1  vector value
-def nxv2i1  : VTScalableVec<2,  i1, 138>;  // n x  2 x i1  vector value
-def nxv4i1  : VTScalableVec<4,  i1, 139>;  // n x  4 x i1  vector value
-def nxv8i1  : VTScalableVec<8,  i1, 140>;  // n x  8 x i1  vector value
-def nxv16i1 : VTScalableVec<16, i1, 141>;  // n x 16 x i1  vector value
-def nxv32i1 : VTScalableVec<32, i1, 142>;  // n x 32 x i1  vector value
-def nxv64i1 : VTScalableVec<64, i1, 143>;  // n x 64 x i1  vector value
-
-def nxv1i8  : VTScalableVec<1,  i8, 144>;  // n x  1 x i8  vector value
-def nxv2i8  : VTScalableVec<2,  i8, 145>;  // n x  2 x i8  vector value
-def nxv4i8  : VTScalableVec<4,  i8, 146>;  // n x  4 x i8  vector value
-def nxv8i8  : VTScalableVec<8,  i8, 147>;  // n x  8 x i8  vector value
-def nxv16i8 : VTScalableVec<16, i8, 148>;  // n x 16 x i8  vector value
-def nxv32i8 : VTScalableVec<32, i8, 149>;  // n x 32 x i8  vector value
-def nxv64i8 : VTScalableVec<64, i8, 150>;  // n x 64 x i8  vector value
-
-def nxv1i16  : VTScalableVec<1,  i16, 151>;  // n x  1 x i16 vector value
-def nxv2i16  : VTScalableVec<2,  i16, 152>;  // n x  2 x i16 vector value
-def nxv4i16  : VTScalableVec<4,  i16, 153>;  // n x  4 x i16 vector value
-def nxv8i16  : VTScalableVec<8,  i16, 154>;  // n x  8 x i16 vector value
-def nxv16i16 : VTScalableVec<16, i16, 155>;  // n x 16 x i16 vector value
-def nxv32i16 : VTScalableVec<32, i16, 156>;  // n x 32 x i16 vector value
-
-def nxv1i32  : VTScalableVec<1,  i32, 157>;  // n x  1 x i32 vector value
-def nxv2i32  : VTScalableVec<2,  i32, 158>;  // n x  2 x i32 vector value
-def nxv4i32  : VTScalableVec<4,  i32, 159>;  // n x  4 x i32 vector value
-def nxv8i32  : VTScalableVec<8,  i32, 160>;  // n x  8 x i32 vector value
-def nxv16i32 : VTScalableVec<16, i32, 161>;  // n x 16 x i32 vector value
-def nxv32i32 : VTScalableVec<32, i32, 162>;  // n x 32 x i32 vector value
-
-def nxv1i64  : VTScalableVec<1,  i64, 163>;  // n x  1 x i64 vector value
-def nxv2i64  : VTScalableVec<2,  i64, 164>;  // n x  2 x i64 vector value
-def nxv4i64  : VTScalableVec<4,  i64, 165>;  // n x  4 x i64 vector value
-def nxv8i64  : VTScalableVec<8,  i64, 166>;  // n x  8 x i64 vector value
-def nxv16i64 : VTScalableVec<16, i64, 167>;  // n x 16 x i64 vector value
-def nxv32i64 : VTScalableVec<32, i64, 168>;  // n x 32 x i64 vector value
-
-def nxv1f16  : VTScalableVec<1,  f16, 169>;  // n x  1 x  f16 vector value
-def nxv2f16  : VTScalableVec<2,  f16, 170>;  // n x  2 x  f16 vector value
-def nxv4f16  : VTScalableVec<4,  f16, 171>;  // n x  4 x  f16 vector value
-def nxv8f16  : VTScalableVec<8,  f16, 172>;  // n x  8 x  f16 vector value
-def nxv16f16 : VTScalableVec<16, f16, 173>;  // n x 16 x  f16 vector value
-def nxv32f16 : VTScalableVec<32, f16, 174>;  // n x 32 x  f16 vector value
-
-def nxv1bf16  : VTScalableVec<1,  bf16, 175>;  // n x  1 x bf16 vector value
-def nxv2bf16  : VTScalableVec<2,  bf16, 176>;  // n x  2 x bf16 vector value
-def nxv4bf16  : VTScalableVec<4,  bf16, 177>;  // n x  4 x bf16 vector value
-def nxv8bf16  : VTScalableVec<8,  bf16, 178>;  // n x  8 x bf16 vector value
-def nxv16bf16 : VTScalableVec<16, bf16, 179>;  // n x 16 x bf16 vector value
-def nxv32bf16 : VTScalableVec<32, bf16, 180>;  // n x 32 x bf16 vector value
-
-def nxv1f32  : VTScalableVec<1,  f32, 181>;  // n x  1 x  f32 vector value
-def nxv2f32  : VTScalableVec<2,  f32, 182>;  // n x  2 x  f32 vector value
-def nxv4f32  : VTScalableVec<4,  f32, 183>;  // n x  4 x  f32 vector value
-def nxv8f32  : VTScalableVec<8,  f32, 184>;  // n x  8 x  f32 vector value
-def nxv16f32 : VTScalableVec<16, f32, 185>;  // n x 16 x  f32 vector value
-
-def nxv1f64  : VTScalableVec<1,  f64, 186>;  // n x  1 x  f64 vector value
-def nxv2f64  : VTScalableVec<2,  f64, 187>;  // n x  2 x  f64 vector value
-def nxv4f64  : VTScalableVec<4,  f64, 188>;  // n x  4 x  f64 vector value
-def nxv8f64  : VTScalableVec<8,  f64, 189>;  // n x  8 x  f64 vector value
+def v1bf16   : VTVec<1,   bf16,  99>;  //    1 x bf16 vector value
+def v2bf16   : VTVec<2,   bf16, 100>;  //    2 x bf16 vector value
+def v3bf16   : VTVec<3,   bf16, 101>;  //    3 x bf16 vector value
+def v4bf16   : VTVec<4,   bf16, 102>;  //    4 x bf16 vector value
+def v8bf16   : VTVec<8,   bf16, 103>;  //    8 x bf16 vector value
+def v16bf16  : VTVec<16,  bf16, 104>;  //   16 x bf16 vector value
+def v32bf16  : VTVec<32,  bf16, 105>;  //   32 x bf16 vector value
+def v64bf16  : VTVec<64,  bf16, 106>;  //   64 x bf16 vector value
+def v128bf16 : VTVec<128, bf16, 107>;  //  128 x bf16 vector value
+
+def v1f32    : VTVec<1,    f32, 108>;  //    1 x f32 vector value
+def v2f32    : VTVec<2,    f32, 109>;  //    2 x f32 vector value
+def v3f32    : VTVec<3,    f32, 110>;  //    3 x f32 vector value
+def v4f32    : VTVec<4,    f32, 111>;  //    4 x f32 vector value
+def v5f32    : VTVec<5,    f32, 112>;  //    5 x f32 vector value
+def v6f32    : VTVec<6,    f32, 113>;  //    6 x f32 vector value
+def v7f32    : VTVec<7,    f32, 114>;  //    7 x f32 vector value
+def v8f32    : VTVec<8,    f32, 115>;  //    8 x f32 vector value
+def v9f32    : VTVec<9,    f32, 116>;  //    9 x f32 vector value
+def v10f32   : VTVec<10,   f32, 117>;  //   10 x f32 vector value
+def v11f32   : VTVec<11,   f32, 118>;  //   11 x f32 vector value
+def v12f32   : VTVec<12,   f32, 119>;  //   12 x f32 vector value
+def v16f32   : VTVec<16,   f32, 120>;  //   16 x f32 vector value
+def v32f32   : VTVec<32,   f32, 121>;  //   32 x f32 vector value
+def v64f32   : VTVec<64,   f32, 122>;  //   64 x f32 vector value
+def v128f32  : VTVec<128,  f32, 123>;  //  128 x f32 vector value
+def v256f32  : VTVec<256,  f32, 124>;  //  256 x f32 vector value
+def v512f32  : VTVec<512,  f32, 125>;  //  512 x f32 vector value
+def v1024f32 : VTVec<1024, f32, 126>;  // 1024 x f32 vector value
+def v2048f32 : VTVec<2048, f32, 127>;  // 2048 x f32 vector value
+
+def v1f64    : VTVec<1,    f64, 128>;  //    1 x f64 vector value
+def v2f64    : VTVec<2,    f64, 129>;  //    2 x f64 vector value
+def v3f64    : VTVec<3,    f64, 130>;  //    3 x f64 vector value
+def v4f64    : VTVec<4,    f64, 131>;  //    4 x f64 vector value
+def v8f64    : VTVec<8,    f64, 132>;  //    8 x f64 vector value
+def v16f64   : VTVec<16,   f64, 133>;  //   16 x f64 vector value
+def v32f64   : VTVec<32,   f64, 134>;  //   32 x f64 vector value
+def v64f64   : VTVec<64,   f64, 135>;  //   64 x f64 vector value
+def v128f64  : VTVec<128,  f64, 136>;  //  128 x f64 vector value
+def v256f64  : VTVec<256,  f64, 137>;  //  256 x f64 vector value
+
+def nxv1i1  : VTScalableVec<1,  i1, 138>;  // n x  1 x i1  vector value
+def nxv2i1  : VTScalableVec<2,  i1, 139>;  // n x  2 x i1  vector value
+def nxv4i1  : VTScalableVec<4,  i1, 140>;  // n x  4 x i1  vector value
+def nxv8i1  : VTScalableVec<8,  i1, 141>;  // n x  8 x i1  vector value
+def nxv16i1 : VTScalableVec<16, i1, 142>;  // n x 16 x i1  vector value
+def nxv32i1 : VTScalableVec<32, i1, 143>;  // n x 32 x i1  vector value
+def nxv64i1 : VTScalableVec<64, i1, 144>;  // n x 64 x i1  vector value
+
+def nxv1i8  : VTScalableVec<1,  i8, 145>;  // n x  1 x i8  vector value
+def nxv2i8  : VTScalableVec<2,  i8, 146>;  // n x  2 x i8  vector value
+def nxv4i8  : VTScalableVec<4,  i8, 147>;  // n x  4 x i8  vector value
+def nxv8i8  : VTScalableVec<8,  i8, 148>;  // n x  8 x i8  vector value
+def nxv16i8 : VTScalableVec<16, i8, 149>;  // n x 16 x i8  vector value
+def nxv32i8 : VTScalableVec<32, i8, 150>;  // n x 32 x i8  vector value
+def nxv64i8 : VTScalableVec<64, i8, 151>;  // n x 64 x i8  vector value
+
+def nxv1i16  : VTScalableVec<1,  i16, 152>;  // n x  1 x i16 vector value
+def nxv2i16  : VTScalableVec<2,  i16, 153>;  // n x  2 x i16 vector value
+def nxv4i16  : VTScalableVec<4,  i16, 154>;  // n x  4 x i16 vector value
+def nxv8i16  : VTScalableVec<8,  i16, 155>;  // n x  8 x i16 vector value
+def nxv16i16 : VTScalableVec<16, i16, 156>;  // n x 16 x i16 vector value
+def nxv32i16 : VTScalableVec<32, i16, 157>;  // n x 32 x i16 vector value
+
+def nxv1i32  : VTScalableVec<1,  i32, 158>;  // n x  1 x i32 vector value
+def nxv2i32  : VTScalableVec<2,  i32, 159>;  // n x  2 x i32 vector value
+def nxv4i32  : VTScalableVec<4,  i32, 160>;  // n x  4 x i32 vector value
+def nxv8i32  : VTScalableVec<8,  i32, 161>;  // n x  8 x i32 vector value
+def nxv16i32 : VTScalableVec<16, i32, 162>;  // n x 16 x i32 vector value
+def nxv32i32 : VTScalableVec<32, i32, 163>;  // n x 32 x i32 vector value
+
+def nxv1i64  : VTScalableVec<1,  i64, 164>;  // n x  1 x i64 vector value
+def nxv2i64  : VTScalableVec<2,  i64, 165>;  // n x  2 x i64 vector value
+def nxv4i64  : VTScalableVec<4,  i64, 166>;  // n x  4 x i64 vector value
+def nxv8i64  : VTScalableVec<8,  i64, 167>;  // n x  8 x i64 vector value
+def nxv16i64 : VTScalableVec<16, i64, 168>;  // n x 16 x i64 vector value
+def nxv32i64 : VTScalableVec<32, i64, 169>;  // n x 32 x i64 vector value
+
+def nxv1f16  : VTScalableVec<1,  f16, 170>;  // n x  1 x  f16 vector value
+def nxv2f16  : VTScalableVec<2,  f16, 171>;  // n x  2 x  f16 vector value
+def nxv4f16  : VTScalableVec<4,  f16, 172>;  // n x  4 x  f16 vector value
+def nxv8f16  : VTScalableVec<8,  f16, 173>;  // n x  8 x  f16 vector value
+def nxv16f16 : VTScalableVec<16, f16, 174>;  // n x 16 x  f16 vector value
+def nxv32f16 : VTScalableVec<32, f16, 175>;  // n x 32 x  f16 vector value
+
+def nxv1bf16  : VTScalableVec<1,  bf16, 176>;  // n x  1 x bf16 vector value
+def nxv2bf16  : VTScalableVec<2,  bf16, 177>;  // n x  2 x bf16 vector value
+def nxv4bf16  : VTScalableVec<4,  bf16, 178>;  // n x  4 x bf16 vector value
+def nxv8bf16  : VTScalableVec<8,  bf16, 179>;  // n x  8 x bf16 vector value
+def nxv16bf16 : VTScalableVec<16, bf16, 180>;  // n x 16 x bf16 vector value
+def nxv32bf16 : VTScalableVec<32, bf16, 181>;  // n x 32 x bf16 vector value
+
+def nxv1f32  : VTScalableVec<1,  f32, 182>;  // n x  1 x  f32 vector value
+def nxv2f32  : VTScalableVec<2,  f32, 183>;  // n x  2 x  f32 vector value
+def nxv4f32  : VTScalableVec<4,  f32, 184>;  // n x  4 x  f32 vector value
+def nxv8f32  : VTScalableVec<8,  f32, 185>;  // n x  8 x  f32 vector value
+def nxv16f32 : VTScalableVec<16, f32, 186>;  // n x 16 x  f32 vector value
+
+def nxv1f64  : VTScalableVec<1,  f64, 187>;  // n x  1 x  f64 vector value
+def nxv2f64  : VTScalableVec<2,  f64, 188>;  // n x  2 x  f64 vector value
+def nxv4f64  : VTScalableVec<4,  f64, 189>;  // n x  4 x  f64 vector value
+def nxv8f64  : VTScalableVec<8,  f64, 190>;  // n x  8 x  f64 vector value
 
 // Sz = NF * MinNumElts * 8(bits)
-def riscv_nxv1i8x2   : VTVecTup<16, 2, i8, 190>;  // RISCV vector tuple(min_num_elts=1, nf=2)
-def riscv_nxv1i8x3   : VTVecTup<24, 3, i8, 191>;  // RISCV vector tuple(min_num_elts=1, nf=3)
-def riscv_nxv1i8x4   : VTVecTup<32, 4, i8, 192>;  // RISCV vector tuple(min_num_elts=1, nf=4)
-def riscv_nxv1i8x5   : VTVecTup<40, 5, i8, 193>;  // RISCV vector tuple(min_num_elts=1, nf=5)
-def riscv_nxv1i8x6   : VTVecTup<48, 6, i8, 194>;  // RISCV vector tuple(min_num_elts=1, nf=6)
-def riscv_nxv1i8x7   : VTVecTup<56, 7, i8, 195>;  // RISCV vector tuple(min_num_elts=1, nf=7)
-def riscv_nxv1i8x8   : VTVecTup<64, 8, i8, 196>;  // RISCV vector tuple(min_num_elts=1, nf=8)
-def riscv_nxv2i8x2   : VTVecTup<32, 2, i8, 197>;  // RISCV vector tuple(min_num_elts=2, nf=2)
-def riscv_nxv2i8x3   : VTVecTup<48, 3, i8, 198>;  // RISCV vector tuple(min_num_elts=2, nf=3)
-def riscv_nxv2i8x4   : VTVecTup<64, 4, i8, 199>;  // RISCV vector tuple(min_num_elts=2, nf=4)
-def riscv_nxv2i8x5   : VTVecTup<80, 5, i8, 200>;  // RISCV vector tuple(min_num_elts=2, nf=5)
-def riscv_nxv2i8x6   : VTVecTup<96, 6, i8, 201>;  // RISCV vector tuple(min_num_elts=2, nf=6)
-def riscv_nxv2i8x7   : VTVecTup<112, 7, i8, 202>; // RISCV vector tuple(min_num_elts=2, nf=7)
-def riscv_nxv2i8x8   : VTVecTup<128, 8, i8, 203>; // RISCV vector tuple(min_num_elts=2, nf=8)
-def riscv_nxv4i8x2   : VTVecTup<64, 2, i8, 204>;  // RISCV vector tuple(min_num_elts=4, nf=2)
-def riscv_nxv4i8x3   : VTVecTup<96, 3, i8, 205>;  // RISCV vector tuple(min_num_elts=4, nf=3)
-def riscv_nxv4i8x4   : VTVecTup<128, 4, i8, 206>; // RISCV vector tuple(min_num_elts=4, nf=4)
-def riscv_nxv4i8x5   : VTVecTup<160, 5, i8, 207>; // RISCV vector tuple(min_num_elts=4, nf=5)
-def riscv_nxv4i8x6   : VTVecTup<192, 6, i8, 208>; // RISCV vector tuple(min_num_elts=4, nf=6)
-def riscv_nxv4i8x7   : VTVecTup<224, 7, i8, 209>; // RISCV vector tuple(min_num_elts=4, nf=7)
-def riscv_nxv4i8x8   : VTVecTup<256, 8, i8, 210>; // RISCV vector tuple(min_num_elts=4, nf=8)
-def riscv_nxv8i8x2   : VTVecTup<128, 2, i8, 211>; // RISCV vector tuple(min_num_elts=8, nf=2)
-def riscv_nxv8i8x3   : VTVecTup<192, 3, i8, 212>; // RISCV vector tuple(min_num_elts=8, nf=3)
-def riscv_nxv8i8x4   : VTVecTup<256, 4, i8, 213>; // RISCV vector tuple(min_num_elts=8, nf=4)
-def riscv_nxv8i8x5   : VTVecTup<320, 5, i8, 214>; // RISCV vector tuple(min_num_elts=8, nf=5)
-def riscv_nxv8i8x6   : VTVecTup<384, 6, i8, 215>; // RISCV vector tuple(min_num_elts=8, nf=6)
-def riscv_nxv8i8x7   : VTVecTup<448, 7, i8, 216>; // RISCV vector tuple(min_num_elts=8, nf=7)
-def riscv_nxv8i8x8   : VTVecTup<512, 8, i8, 217>; // RISCV vector tuple(min_num_elts=8, nf=8)
-def riscv_nxv16i8x2  : VTVecTup<256, 2, i8, 218>; // RISCV vector tuple(min_num_elts=16, nf=2)
-def riscv_nxv16i8x3  : VTVecTup<384, 3, i8, 219>; // RISCV vector tuple(min_num_elts=16, nf=3)
-def riscv_nxv16i8x4  : VTVecTup<512, 4, i8, 220>; // RISCV vector tuple(min_num_elts=16, nf=4)
-def riscv_nxv32i8x2  : VTVecTup<512, 2, i8, 221>; // RISCV vector tuple(min_num_elts=32, nf=2)
-
-def x86mmx    : ValueType<64,   222>;  // X86 MMX value
-def Glue      : ValueType<0,    223>;  // Pre-RA sched glue
-def isVoid    : ValueType<0,    224>;  // Produces no value
-def untyped   : ValueType<8,    225> { // Produces an untyped value
+def riscv_nxv1i8x2   : VTVecTup<16, 2, i8, 191>;  // RISCV vector tuple(min_num_elts=1, nf=2)
+def riscv_nxv1i8x3   : VTVecTup<24, 3, i8, 192>;  // RISCV vector tuple(min_num_elts=1, nf=3)
+def riscv_nxv1i8x4   : VTVecTup<32, 4, i8, 193>;  // RISCV vector tuple(min_num_elts=1, nf=4)
+def riscv_nxv1i8x5   : VTVecTup<40, 5, i8, 194>;  // RISCV vector tuple(min_num_elts=1, nf=5)
+def riscv_nxv1i8x6   : VTVecTup<48, 6, i8, 195>;  // RISCV vector tuple(min_num_elts=1, nf=6)
+def riscv_nxv1i8x7   : VTVecTup<56, 7, i8, 196>;  // RISCV vector tuple(min_num_elts=1, nf=7)
+def riscv_nxv1i8x8   : VTVecTup<64, 8, i8, 197>;  // RISCV vector tuple(min_num_elts=1, nf=8)
+def riscv_nxv2i8x2   : VTVecTup<32, 2, i8, 198>;  // RISCV vector tuple(min_num_elts=2, nf=2)
+def riscv_nxv2i8x3   : VTVecTup<48, 3, i8, 199>;  // RISCV vector tuple(min_num_elts=2, nf=3)
+def riscv_nxv2i8x4   : VTVecTup<64, 4, i8, 200>;  // RISCV vector tuple(min_num_elts=2, nf=4)
+def riscv_nxv2i8x5   : VTVecTup<80, 5, i8, 201>;  // RISCV vector tuple(min_num_elts=2, nf=5)
+def riscv_nxv2i8x6   : VTVecTup<96, 6, i8, 202>;  // RISCV vector tuple(min_num_elts=2, nf=6)
+def riscv_nxv2i8x7   : VTVecTup<112, 7, i8, 203>; // RISCV vector tuple(min_num_elts=2, nf=7)
+def riscv_nxv2i8x8   : VTVecTup<128, 8, i8, 204>; // RISCV vector tuple(min_num_elts=2, nf=8)
+def riscv_nxv4i8x2   : VTVecTup<64, 2, i8, 205>;  // RISCV vector tuple(min_num_elts=4, nf=2)
+def riscv_nxv4i8x3   : VTVecTup<96, 3, i8, 206>;  // RISCV vector tuple(min_num_elts=4, nf=3)
+def riscv_nxv4i8x4   : VTVecTup<128, 4, i8, 207>; // RISCV vector tuple(min_num_elts=4, nf=4)
+def riscv_nxv4i8x5   : VTVecTup<160, 5, i8, 208>; // RISCV vector tuple(min_num_elts=4, nf=5)
+def riscv_nxv4i8x6   : VTVecTup<192, 6, i8, 209>; // RISCV vector tuple(min_num_elts=4, nf=6)
+def riscv_nxv4i8x7   : VTVecTup<224, 7, i8, 210>; // RISCV vector tuple(min_num_elts=4, nf=7)
+def riscv_nxv4i8x8   : VTVecTup<256, 8, i8, 211>; // RISCV vector tuple(min_num_elts=4, nf=8)
+def riscv_nxv8i8x2   : VTVecTup<128, 2, i8, 212>; // RISCV vector tuple(min_num_elts=8, nf=2)
+def riscv_nxv8i8x3   : VTVecTup<192, 3, i8, 213>; // RISCV vector tuple(min_num_elts=8, nf=3)
+def riscv_nxv8i8x4   : VTVecTup<256, 4, i8, 214>; // RISCV vector tuple(min_num_elts=8, nf=4)
+def riscv_nxv8i8x5   : VTVecTup<320, 5, i8, 215>; // RISCV vector tuple(min_num_elts=8, nf=5)
+def riscv_nxv8i8x6   : VTVecTup<384, 6, i8, 216>; // RISCV vector tuple(min_num_elts=8, nf=6)
+def riscv_nxv8i8x7   : VTVecTup<448, 7, i8, 217>; // RISCV vector tuple(min_num_elts=8, nf=7)
+def riscv_nxv8i8x8   : VTVecTup<512, 8, i8, 218>; // RISCV vector tuple(min_num_elts=8, nf=8)
+def riscv_nxv16i8x2  : VTVecTup<256, 2, i8, 219>; // RISCV vector tuple(min_num_elts=16, nf=2)
+def riscv_nxv16i8x3  : VTVecTup<384, 3, i8, 220>; // RISCV vector tuple(min_num_elts=16, nf=3)
+def riscv_nxv16i8x4  : VTVecTup<512, 4, i8, 221>; // RISCV vector tuple(min_num_elts=16, nf=4)
+def riscv_nxv32i8x2  : VTVecTup<512, 2, i8, 222>; // RISCV vector tuple(min_num_elts=32, nf=2)
+
+def x86mmx    : ValueType<64,   223>;  // X86 MMX value
+def Glue      : ValueType<0,    224>;  // Pre-RA sched glue
+def isVoid    : ValueType<0,    225>;  // Produces no value
+def untyped   : ValueType<8,    226> { // Produces an untyped value
   let LLVMName = "Untyped";
 }
-def funcref   : ValueType<0,    226>;  // WebAssembly's funcref type
-def externref : ValueType<0,    227>;  // WebAssembly's externref type
-def exnref    : ValueType<0,    228>;  // W...
[truncated]

@llvmbot
Member

llvmbot commented Oct 4, 2024

@llvm/pr-subscribers-llvm-transforms

Author: Luke Lau (lukel97)

+def v8f64    : VTVec<8,    f64, 132>;  //    8 x f64 vector value
+def v16f64   : VTVec<16,   f64, 133>;  //   16 x f64 vector value
+def v32f64   : VTVec<32,   f64, 134>;  //   32 x f64 vector value
+def v64f64   : VTVec<64,   f64, 135>;  //   64 x f64 vector value
+def v128f64  : VTVec<128,  f64, 136>;  //  128 x f64 vector value
+def v256f64  : VTVec<256,  f64, 137>;  //  256 x f64 vector value
+
+def nxv1i1  : VTScalableVec<1,  i1, 138>;  // n x  1 x i1  vector value
+def nxv2i1  : VTScalableVec<2,  i1, 139>;  // n x  2 x i1  vector value
+def nxv4i1  : VTScalableVec<4,  i1, 140>;  // n x  4 x i1  vector value
+def nxv8i1  : VTScalableVec<8,  i1, 141>;  // n x  8 x i1  vector value
+def nxv16i1 : VTScalableVec<16, i1, 142>;  // n x 16 x i1  vector value
+def nxv32i1 : VTScalableVec<32, i1, 143>;  // n x 32 x i1  vector value
+def nxv64i1 : VTScalableVec<64, i1, 144>;  // n x 64 x i1  vector value
+
+def nxv1i8  : VTScalableVec<1,  i8, 145>;  // n x  1 x i8  vector value
+def nxv2i8  : VTScalableVec<2,  i8, 146>;  // n x  2 x i8  vector value
+def nxv4i8  : VTScalableVec<4,  i8, 147>;  // n x  4 x i8  vector value
+def nxv8i8  : VTScalableVec<8,  i8, 148>;  // n x  8 x i8  vector value
+def nxv16i8 : VTScalableVec<16, i8, 149>;  // n x 16 x i8  vector value
+def nxv32i8 : VTScalableVec<32, i8, 150>;  // n x 32 x i8  vector value
+def nxv64i8 : VTScalableVec<64, i8, 151>;  // n x 64 x i8  vector value
+
+def nxv1i16  : VTScalableVec<1,  i16, 152>;  // n x  1 x i16 vector value
+def nxv2i16  : VTScalableVec<2,  i16, 153>;  // n x  2 x i16 vector value
+def nxv4i16  : VTScalableVec<4,  i16, 154>;  // n x  4 x i16 vector value
+def nxv8i16  : VTScalableVec<8,  i16, 155>;  // n x  8 x i16 vector value
+def nxv16i16 : VTScalableVec<16, i16, 156>;  // n x 16 x i16 vector value
+def nxv32i16 : VTScalableVec<32, i16, 157>;  // n x 32 x i16 vector value
+
+def nxv1i32  : VTScalableVec<1,  i32, 158>;  // n x  1 x i32 vector value
+def nxv2i32  : VTScalableVec<2,  i32, 159>;  // n x  2 x i32 vector value
+def nxv4i32  : VTScalableVec<4,  i32, 160>;  // n x  4 x i32 vector value
+def nxv8i32  : VTScalableVec<8,  i32, 161>;  // n x  8 x i32 vector value
+def nxv16i32 : VTScalableVec<16, i32, 162>;  // n x 16 x i32 vector value
+def nxv32i32 : VTScalableVec<32, i32, 163>;  // n x 32 x i32 vector value
+
+def nxv1i64  : VTScalableVec<1,  i64, 164>;  // n x  1 x i64 vector value
+def nxv2i64  : VTScalableVec<2,  i64, 165>;  // n x  2 x i64 vector value
+def nxv4i64  : VTScalableVec<4,  i64, 166>;  // n x  4 x i64 vector value
+def nxv8i64  : VTScalableVec<8,  i64, 167>;  // n x  8 x i64 vector value
+def nxv16i64 : VTScalableVec<16, i64, 168>;  // n x 16 x i64 vector value
+def nxv32i64 : VTScalableVec<32, i64, 169>;  // n x 32 x i64 vector value
+
+def nxv1f16  : VTScalableVec<1,  f16, 170>;  // n x  1 x  f16 vector value
+def nxv2f16  : VTScalableVec<2,  f16, 171>;  // n x  2 x  f16 vector value
+def nxv4f16  : VTScalableVec<4,  f16, 172>;  // n x  4 x  f16 vector value
+def nxv8f16  : VTScalableVec<8,  f16, 173>;  // n x  8 x  f16 vector value
+def nxv16f16 : VTScalableVec<16, f16, 174>;  // n x 16 x  f16 vector value
+def nxv32f16 : VTScalableVec<32, f16, 175>;  // n x 32 x  f16 vector value
+
+def nxv1bf16  : VTScalableVec<1,  bf16, 176>;  // n x  1 x bf16 vector value
+def nxv2bf16  : VTScalableVec<2,  bf16, 177>;  // n x  2 x bf16 vector value
+def nxv4bf16  : VTScalableVec<4,  bf16, 178>;  // n x  4 x bf16 vector value
+def nxv8bf16  : VTScalableVec<8,  bf16, 179>;  // n x  8 x bf16 vector value
+def nxv16bf16 : VTScalableVec<16, bf16, 180>;  // n x 16 x bf16 vector value
+def nxv32bf16 : VTScalableVec<32, bf16, 181>;  // n x 32 x bf16 vector value
+
+def nxv1f32  : VTScalableVec<1,  f32, 182>;  // n x  1 x  f32 vector value
+def nxv2f32  : VTScalableVec<2,  f32, 183>;  // n x  2 x  f32 vector value
+def nxv4f32  : VTScalableVec<4,  f32, 184>;  // n x  4 x  f32 vector value
+def nxv8f32  : VTScalableVec<8,  f32, 185>;  // n x  8 x  f32 vector value
+def nxv16f32 : VTScalableVec<16, f32, 186>;  // n x 16 x  f32 vector value
+
+def nxv1f64  : VTScalableVec<1,  f64, 187>;  // n x  1 x  f64 vector value
+def nxv2f64  : VTScalableVec<2,  f64, 188>;  // n x  2 x  f64 vector value
+def nxv4f64  : VTScalableVec<4,  f64, 189>;  // n x  4 x  f64 vector value
+def nxv8f64  : VTScalableVec<8,  f64, 190>;  // n x  8 x  f64 vector value
 
 // Sz = NF * MinNumElts * 8(bits)
-def riscv_nxv1i8x2   : VTVecTup<16, 2, i8, 190>;  // RISCV vector tuple(min_num_elts=1, nf=2)
-def riscv_nxv1i8x3   : VTVecTup<24, 3, i8, 191>;  // RISCV vector tuple(min_num_elts=1, nf=3)
-def riscv_nxv1i8x4   : VTVecTup<32, 4, i8, 192>;  // RISCV vector tuple(min_num_elts=1, nf=4)
-def riscv_nxv1i8x5   : VTVecTup<40, 5, i8, 193>;  // RISCV vector tuple(min_num_elts=1, nf=5)
-def riscv_nxv1i8x6   : VTVecTup<48, 6, i8, 194>;  // RISCV vector tuple(min_num_elts=1, nf=6)
-def riscv_nxv1i8x7   : VTVecTup<56, 7, i8, 195>;  // RISCV vector tuple(min_num_elts=1, nf=7)
-def riscv_nxv1i8x8   : VTVecTup<64, 8, i8, 196>;  // RISCV vector tuple(min_num_elts=1, nf=8)
-def riscv_nxv2i8x2   : VTVecTup<32, 2, i8, 197>;  // RISCV vector tuple(min_num_elts=2, nf=2)
-def riscv_nxv2i8x3   : VTVecTup<48, 3, i8, 198>;  // RISCV vector tuple(min_num_elts=2, nf=3)
-def riscv_nxv2i8x4   : VTVecTup<64, 4, i8, 199>;  // RISCV vector tuple(min_num_elts=2, nf=4)
-def riscv_nxv2i8x5   : VTVecTup<80, 5, i8, 200>;  // RISCV vector tuple(min_num_elts=2, nf=5)
-def riscv_nxv2i8x6   : VTVecTup<96, 6, i8, 201>;  // RISCV vector tuple(min_num_elts=2, nf=6)
-def riscv_nxv2i8x7   : VTVecTup<112, 7, i8, 202>; // RISCV vector tuple(min_num_elts=2, nf=7)
-def riscv_nxv2i8x8   : VTVecTup<128, 8, i8, 203>; // RISCV vector tuple(min_num_elts=2, nf=8)
-def riscv_nxv4i8x2   : VTVecTup<64, 2, i8, 204>;  // RISCV vector tuple(min_num_elts=4, nf=2)
-def riscv_nxv4i8x3   : VTVecTup<96, 3, i8, 205>;  // RISCV vector tuple(min_num_elts=4, nf=3)
-def riscv_nxv4i8x4   : VTVecTup<128, 4, i8, 206>; // RISCV vector tuple(min_num_elts=4, nf=4)
-def riscv_nxv4i8x5   : VTVecTup<160, 5, i8, 207>; // RISCV vector tuple(min_num_elts=4, nf=5)
-def riscv_nxv4i8x6   : VTVecTup<192, 6, i8, 208>; // RISCV vector tuple(min_num_elts=4, nf=6)
-def riscv_nxv4i8x7   : VTVecTup<224, 7, i8, 209>; // RISCV vector tuple(min_num_elts=4, nf=7)
-def riscv_nxv4i8x8   : VTVecTup<256, 8, i8, 210>; // RISCV vector tuple(min_num_elts=4, nf=8)
-def riscv_nxv8i8x2   : VTVecTup<128, 2, i8, 211>; // RISCV vector tuple(min_num_elts=8, nf=2)
-def riscv_nxv8i8x3   : VTVecTup<192, 3, i8, 212>; // RISCV vector tuple(min_num_elts=8, nf=3)
-def riscv_nxv8i8x4   : VTVecTup<256, 4, i8, 213>; // RISCV vector tuple(min_num_elts=8, nf=4)
-def riscv_nxv8i8x5   : VTVecTup<320, 5, i8, 214>; // RISCV vector tuple(min_num_elts=8, nf=5)
-def riscv_nxv8i8x6   : VTVecTup<384, 6, i8, 215>; // RISCV vector tuple(min_num_elts=8, nf=6)
-def riscv_nxv8i8x7   : VTVecTup<448, 7, i8, 216>; // RISCV vector tuple(min_num_elts=8, nf=7)
-def riscv_nxv8i8x8   : VTVecTup<512, 8, i8, 217>; // RISCV vector tuple(min_num_elts=8, nf=8)
-def riscv_nxv16i8x2  : VTVecTup<256, 2, i8, 218>; // RISCV vector tuple(min_num_elts=16, nf=2)
-def riscv_nxv16i8x3  : VTVecTup<384, 3, i8, 219>; // RISCV vector tuple(min_num_elts=16, nf=3)
-def riscv_nxv16i8x4  : VTVecTup<512, 4, i8, 220>; // RISCV vector tuple(min_num_elts=16, nf=4)
-def riscv_nxv32i8x2  : VTVecTup<512, 2, i8, 221>; // RISCV vector tuple(min_num_elts=32, nf=2)
-
-def x86mmx    : ValueType<64,   222>;  // X86 MMX value
-def Glue      : ValueType<0,    223>;  // Pre-RA sched glue
-def isVoid    : ValueType<0,    224>;  // Produces no value
-def untyped   : ValueType<8,    225> { // Produces an untyped value
+def riscv_nxv1i8x2   : VTVecTup<16, 2, i8, 191>;  // RISCV vector tuple(min_num_elts=1, nf=2)
+def riscv_nxv1i8x3   : VTVecTup<24, 3, i8, 192>;  // RISCV vector tuple(min_num_elts=1, nf=3)
+def riscv_nxv1i8x4   : VTVecTup<32, 4, i8, 193>;  // RISCV vector tuple(min_num_elts=1, nf=4)
+def riscv_nxv1i8x5   : VTVecTup<40, 5, i8, 194>;  // RISCV vector tuple(min_num_elts=1, nf=5)
+def riscv_nxv1i8x6   : VTVecTup<48, 6, i8, 195>;  // RISCV vector tuple(min_num_elts=1, nf=6)
+def riscv_nxv1i8x7   : VTVecTup<56, 7, i8, 196>;  // RISCV vector tuple(min_num_elts=1, nf=7)
+def riscv_nxv1i8x8   : VTVecTup<64, 8, i8, 197>;  // RISCV vector tuple(min_num_elts=1, nf=8)
+def riscv_nxv2i8x2   : VTVecTup<32, 2, i8, 198>;  // RISCV vector tuple(min_num_elts=2, nf=2)
+def riscv_nxv2i8x3   : VTVecTup<48, 3, i8, 199>;  // RISCV vector tuple(min_num_elts=2, nf=3)
+def riscv_nxv2i8x4   : VTVecTup<64, 4, i8, 200>;  // RISCV vector tuple(min_num_elts=2, nf=4)
+def riscv_nxv2i8x5   : VTVecTup<80, 5, i8, 201>;  // RISCV vector tuple(min_num_elts=2, nf=5)
+def riscv_nxv2i8x6   : VTVecTup<96, 6, i8, 202>;  // RISCV vector tuple(min_num_elts=2, nf=6)
+def riscv_nxv2i8x7   : VTVecTup<112, 7, i8, 203>; // RISCV vector tuple(min_num_elts=2, nf=7)
+def riscv_nxv2i8x8   : VTVecTup<128, 8, i8, 204>; // RISCV vector tuple(min_num_elts=2, nf=8)
+def riscv_nxv4i8x2   : VTVecTup<64, 2, i8, 205>;  // RISCV vector tuple(min_num_elts=4, nf=2)
+def riscv_nxv4i8x3   : VTVecTup<96, 3, i8, 206>;  // RISCV vector tuple(min_num_elts=4, nf=3)
+def riscv_nxv4i8x4   : VTVecTup<128, 4, i8, 207>; // RISCV vector tuple(min_num_elts=4, nf=4)
+def riscv_nxv4i8x5   : VTVecTup<160, 5, i8, 208>; // RISCV vector tuple(min_num_elts=4, nf=5)
+def riscv_nxv4i8x6   : VTVecTup<192, 6, i8, 209>; // RISCV vector tuple(min_num_elts=4, nf=6)
+def riscv_nxv4i8x7   : VTVecTup<224, 7, i8, 210>; // RISCV vector tuple(min_num_elts=4, nf=7)
+def riscv_nxv4i8x8   : VTVecTup<256, 8, i8, 211>; // RISCV vector tuple(min_num_elts=4, nf=8)
+def riscv_nxv8i8x2   : VTVecTup<128, 2, i8, 212>; // RISCV vector tuple(min_num_elts=8, nf=2)
+def riscv_nxv8i8x3   : VTVecTup<192, 3, i8, 213>; // RISCV vector tuple(min_num_elts=8, nf=3)
+def riscv_nxv8i8x4   : VTVecTup<256, 4, i8, 214>; // RISCV vector tuple(min_num_elts=8, nf=4)
+def riscv_nxv8i8x5   : VTVecTup<320, 5, i8, 215>; // RISCV vector tuple(min_num_elts=8, nf=5)
+def riscv_nxv8i8x6   : VTVecTup<384, 6, i8, 216>; // RISCV vector tuple(min_num_elts=8, nf=6)
+def riscv_nxv8i8x7   : VTVecTup<448, 7, i8, 217>; // RISCV vector tuple(min_num_elts=8, nf=7)
+def riscv_nxv8i8x8   : VTVecTup<512, 8, i8, 218>; // RISCV vector tuple(min_num_elts=8, nf=8)
+def riscv_nxv16i8x2  : VTVecTup<256, 2, i8, 219>; // RISCV vector tuple(min_num_elts=16, nf=2)
+def riscv_nxv16i8x3  : VTVecTup<384, 3, i8, 220>; // RISCV vector tuple(min_num_elts=16, nf=3)
+def riscv_nxv16i8x4  : VTVecTup<512, 4, i8, 221>; // RISCV vector tuple(min_num_elts=16, nf=4)
+def riscv_nxv32i8x2  : VTVecTup<512, 2, i8, 222>; // RISCV vector tuple(min_num_elts=32, nf=2)
+
+def x86mmx    : ValueType<64,   223>;  // X86 MMX value
+def Glue      : ValueType<0,    224>;  // Pre-RA sched glue
+def isVoid    : ValueType<0,    225>;  // Produces no value
+def untyped   : ValueType<8,    226> { // Produces an untyped value
   let LLVMName = "Untyped";
 }
-def funcref   : ValueType<0,    226>;  // WebAssembly's funcref type
-def externref : ValueType<0,    227>;  // WebAssembly's externref type
-def exnref    : ValueType<0,    228>;  // W...
[truncated]
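The RISC-V vector tuple sizes in the hunk above follow the comment's formula `Sz = NF * MinNumElts * 8` (bits). As a quick sanity check of that relationship (a sketch against sample definitions copied from the diff, not part of the patch):

```python
# Verify Sz = NF * MinNumElts * 8 (bits) for sample RISCV vector tuple
# types taken from the diff above: (name, Sz, NF, MinNumElts).
tuples = [
    ("riscv_nxv1i8x2", 16, 2, 1),
    ("riscv_nxv2i8x7", 112, 7, 2),
    ("riscv_nxv4i8x6", 192, 6, 4),
    ("riscv_nxv8i8x8", 512, 8, 8),
    ("riscv_nxv16i8x4", 512, 4, 16),
    ("riscv_nxv32i8x2", 512, 2, 32),
]

for name, sz, nf, min_elts in tuples:
    # Each tuple's bit size is the field count times the minimum
    # element count times 8 bits per i8 element.
    assert sz == nf * min_elts * 8, name
print("all tuple sizes match Sz = NF * MinNumElts * 8")
```

Note also that the only semantic change in the hunk is the insertion of the fixed-length `v1bf16`–`v128bf16` block; every value type numbered after it simply shifts up by one.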

Collaborator

@topperc topperc left a comment

LGTM

@lukel97 lukel97 merged commit 20864d2 into llvm:main Oct 6, 2024
8 checks passed
@llvm-ci
Collaborator

llvm-ci commented Oct 6, 2024

LLVM Buildbot has detected a new failure on builder clang-debian-cpp20 running on clang-debian-cpp20 while building llvm at step 2 "checkout".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/108/builds/4536

Here is the relevant piece of the build log, for reference:
Step 2 (checkout) failure: update (failure)
git version 2.43.0
fatal: unable to access 'https://github.com/llvm/llvm-project.git/': Could not resolve host: github.com
fatal: unable to access 'https://github.com/llvm/llvm-project.git/': Could not resolve host: github.com

Kyvangka1610 added a commit to Kyvangka1610/llvm-project that referenced this pull request Oct 6, 2024
* commit 'FETCH_HEAD':
  [X86] combineAndLoadToBZHI - don't do an return early return if we fail to match a load
  [X86] replace-load-and-with-bzhi.ll - add commuted test cases to show failure to fold
  [X86] replace-load-and-with-bzhi.ll - cleanup check-prefixes to use X86/X64 for 32/64-bit targets
  [ExecutionEngine] Avoid repeated hash lookups (NFC) (llvm#111275)
  [ByteCode] Avoid repeated hash lookups (NFC) (llvm#111273)
  [StaticAnalyzer] Avoid repeated hash lookups (NFC) (llvm#111272)
  [CodeGen] Avoid repeated hash lookups (NFC) (llvm#111274)
  [RISCV] Simplify fixed-vector-fp.ll run lines. NFC
  [libc++][format][1/3] Adds more benchmarks. (llvm#101803)
  [X86] combineOrXorWithSETCC - avoid duplicate SDLoc/operands code. NFC.
  [X86] convertIntLogicToFPLogic - avoid duplicate SDLoc/operands code. NFC.
  [libc] Clean up some include in `libc`. (llvm#110980)
  [X86] combineBitOpWithPACK - avoid duplicate SDLoc/operands code. NFC.
  [X86] combineBitOpWithMOVMSK - avoid duplicate SDLoc/operands code. NFC.
  [X86] combineBitOpWithShift - avoid duplicate SDLoc/operands code. NFC.
  [x86] combineMul - use computeKnownBits directly to find MUL_IMM constant splat.
  [X86] combineSubABS - avoid duplicate SDLoc. NFC.
  [ValueTypes][RISCV] Add v1bf16 type (llvm#111112)
  [VPlan] Add additional FOR hoisting test.
  [clang-tidy] Create bugprone-bitwise-pointer-cast check (llvm#108083)
  [InstCombine] Canonicalize more geps with constant gep bases and constant offsets. (llvm#110033)
  [LV] Honor uniform-after-vectorization in setVectorizedCallDecision.
  [ELF] Pass Ctx & to Arch/
  [ELF] Pass Ctx & to Arch/
  [libc++] Fix a typo (llvm#111239)
  [X86] For minsize memset/memcpy, use byte or double-word accesses (llvm#87003)
  [RISCV] Unify RVBShift_ri and RVBShiftW_ri with Shift_ri and ShiftW_ri. NFC (llvm#111263)
  Revert "Reapply "[AMDGPU][GlobalISel] Fix load/store of pointer vectors, buffer.*.pN (llvm#110714)" (llvm#111059)"
  [libc] Add missing include to __support/StringUtil/tables/stdc_errors.h. (llvm#111271)
  [libc] remove errno.h includes (llvm#110934)
  [NFC][rtsan] Update docs to include [[clang::blocking]] (llvm#111249)
  [RISCV] Give ZEXT_H_RV32 and ZEXT_H_RV64 R-type format to match PACK. NFC
  [mlir][SPIRV] Fix build (2) (llvm#111265)
  [mlir][SPIRV] Fix build error (llvm#111264)
  [mlir][NFC] Mark type converter in `populate...` functions as `const` (llvm#111250)
  [Basic] Avoid repeated hash lookups (NFC) (llvm#111228)
  [RISCV] Use THShift_ri class instead of RVBShift_ri for TH_TST instruction. NFC
  [VPlan] Only generate first lane for VPPredInstPHI if no others used.
  [ELF] Don't call getPPC64TargetInfo outside Driver. NFC
  [GISel] Don't preserve NSW flag when converting G_MUL of INT_MIN to G_SHL. (llvm#111230)
  [APInt] Slightly simplify APInt::ashrSlowCase. NFC (llvm#111220)
  [Sema] Avoid repeated hash lookups (NFC) (llvm#111227)
  [Affine] Avoid repeated hash lookups (NFC) (llvm#111226)
  [Driver] Avoid repeated hash lookups (NFC) (llvm#111225)
  [clang][test] Remove a broken bytecode test
  [ELF] Pass Ctx &
  [ELF] Pass Ctx & to Relocations

Signed-off-by: kyvangka1610 <[email protected]>