[VectorCombine] Fold "(or (zext (bitcast X)), (shl (zext (bitcast Y)), C))" -> "(bitcast (concat X, Y))" MOVMSK bool mask style patterns #119559

Conversation
@llvm/pr-subscribers-llvm-transforms

Author: Simon Pilgrim (RKSimon)

Changes

Mask/Bool vectors are often bitcast to/from scalar integers, in particular when concatenating mask results; this is often due to the difficulty of working with vectors of bool in C/C++. On x86 this typically involves the MOVMSK/KMOV instructions.

To concatenate bool masks, these are typically cast to scalars, which are then zero-extended, shifted and OR'd together.

This patch attempts to match these scalar concatenation patterns and convert them to vector shuffles instead. This in turn often assists with further vector combines, depending on the cost model.

Fixes #111431

Patch is 31.88 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/119559.diff

2 Files Affected:
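(For illustration, a minimal hand-written example of the fold itself; this is not part of the patch below, and assumes a little-endian target and a favourable cost model.)

; Before: the i1 masks are bitcast to scalars, widened, shifted and OR'd.
define i8 @src(<4 x i1> %x, <4 x i1> %y) {
  %b0 = bitcast <4 x i1> %x to i4
  %b1 = bitcast <4 x i1> %y to i4
  %z0 = zext i4 %b0 to i8
  %z1 = zext i4 %b1 to i8
  %s1 = shl i8 %z1, 4
  %or = or disjoint i8 %s1, %z0
  ret i8 %or
}

; After: the masks are concatenated as one vector and bitcast once.
define i8 @tgt(<4 x i1> %x, <4 x i1> %y) {
  %concat = shufflevector <4 x i1> %x, <4 x i1> %y, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
  %or = bitcast <8 x i1> %concat to i8
  ret i8 %or
}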
diff --git a/llvm/lib/Transforms/Vectorize/VectorCombine.cpp b/llvm/lib/Transforms/Vectorize/VectorCombine.cpp
index 033900de55278c..e9cfdf17448dd9 100644
--- a/llvm/lib/Transforms/Vectorize/VectorCombine.cpp
+++ b/llvm/lib/Transforms/Vectorize/VectorCombine.cpp
@@ -115,6 +115,7 @@ class VectorCombine {
bool foldExtractedCmps(Instruction &I);
bool foldSingleElementStore(Instruction &I);
bool scalarizeLoadExtract(Instruction &I);
+ bool foldConcatOfBoolMasks(Instruction &I);
bool foldPermuteOfBinops(Instruction &I);
bool foldShuffleOfBinops(Instruction &I);
bool foldShuffleOfCastops(Instruction &I);
@@ -1423,6 +1424,112 @@ bool VectorCombine::scalarizeLoadExtract(Instruction &I) {
return true;
}
+/// Try to fold "(or (zext (bitcast X)), (shl (zext (bitcast Y)), C))"
+/// to "(bitcast (concat X, Y))"
+/// where X/Y are bitcasted from i1 mask vectors.
+bool VectorCombine::foldConcatOfBoolMasks(Instruction &I) {
+ Type *Ty = I.getType();
+ if (!Ty->isIntegerTy())
+ return false;
+
+ // TODO: Add big endian test coverage
+ if (DL->isBigEndian())
+ return false;
+
+ // Restrict to disjoint cases so the mask vectors aren't overlapping.
+ Instruction *X, *Y;
+ if (!match(&I, m_DisjointOr(m_Instruction(X), m_Instruction(Y))))
+ return false;
+
+ // Allow both sources to contain shl, to handle more generic pattern:
+ // "(or (shl (zext (bitcast X)), C1), (shl (zext (bitcast Y)), C2))"
+ Value *SrcX;
+ uint64_t ShAmtX = 0;
+ if (!match(X, m_OneUse(m_ZExt(m_OneUse(m_BitCast(m_Value(SrcX)))))) &&
+ !match(X, m_OneUse(
+ m_Shl(m_OneUse(m_ZExt(m_OneUse(m_BitCast(m_Value(SrcX))))),
+ m_ConstantInt(ShAmtX)))))
+ return false;
+
+ Value *SrcY;
+ uint64_t ShAmtY = 0;
+ if (!match(Y, m_OneUse(m_ZExt(m_OneUse(m_BitCast(m_Value(SrcY)))))) &&
+ !match(Y, m_OneUse(
+ m_Shl(m_OneUse(m_ZExt(m_OneUse(m_BitCast(m_Value(SrcY))))),
+ m_ConstantInt(ShAmtY)))))
+ return false;
+
+ // Canonicalize larger shift to the RHS.
+ if (ShAmtX > ShAmtY) {
+ std::swap(X, Y);
+ std::swap(SrcX, SrcY);
+ std::swap(ShAmtX, ShAmtY);
+ }
+
+ // Ensure both sources are matching vXi1 bool mask types, and that the shift
+ // difference is the mask width so they can be easily concatenated together.
+ uint64_t ShAmtDiff = ShAmtY - ShAmtX;
+ unsigned NumSHL = (ShAmtX > 0) + (ShAmtY > 0);
+ unsigned BitWidth = Ty->getPrimitiveSizeInBits();
+ auto *MaskTy = dyn_cast<FixedVectorType>(SrcX->getType());
+ if (!MaskTy || SrcX->getType() != SrcY->getType() ||
+ !MaskTy->getElementType()->isIntegerTy(1) ||
+ MaskTy->getNumElements() != ShAmtDiff ||
+ MaskTy->getNumElements() > (BitWidth / 2))
+ return false;
+
+ auto *ConcatTy = FixedVectorType::getDoubleElementsVectorType(MaskTy);
+ auto *ConcatIntTy = Type::getIntNTy(Ty->getContext(), ConcatTy->getNumElements());
+ auto *MaskIntTy = Type::getIntNTy(Ty->getContext(), ShAmtDiff);
+
+ SmallVector<int, 32> ConcatMask(ConcatTy->getNumElements());
+ std::iota(ConcatMask.begin(), ConcatMask.end(), 0);
+
+ // TODO: Is it worth supporting multi use cases?
+ InstructionCost OldCost = 0;
+ OldCost += TTI.getArithmeticInstrCost(Instruction::Or, Ty, CostKind);
+ OldCost +=
+ NumSHL * TTI.getArithmeticInstrCost(Instruction::Shl, Ty, CostKind);
+ OldCost += 2 * TTI.getCastInstrCost(Instruction::ZExt, Ty, MaskIntTy,
+ TTI::CastContextHint::None, CostKind);
+ OldCost += 2 * TTI.getCastInstrCost(Instruction::BitCast, MaskIntTy, MaskTy,
+ TTI::CastContextHint::None, CostKind);
+
+ InstructionCost NewCost = 0;
+ NewCost += TTI.getShuffleCost(TargetTransformInfo::SK_PermuteTwoSrc, MaskTy,
+ ConcatMask, CostKind);
+ NewCost += TTI.getCastInstrCost(Instruction::BitCast, ConcatIntTy, ConcatTy,
+ TTI::CastContextHint::None, CostKind);
+ if (Ty != ConcatIntTy)
+ NewCost += TTI.getCastInstrCost(Instruction::ZExt, Ty, ConcatIntTy,
+ TTI::CastContextHint::None, CostKind);
+ if (ShAmtX > 0)
+ NewCost += TTI.getArithmeticInstrCost(Instruction::Shl, Ty, CostKind);
+
+ if (NewCost > OldCost)
+ return false;
+
+ // Build bool mask concatenation, bitcast back to scalar integer, and perform
+ // any residual zero-extension or shifting.
+ Value *Concat = Builder.CreateShuffleVector(SrcX, SrcY, ConcatMask);
+ Worklist.pushValue(Concat);
+
+ Value *Result = Builder.CreateBitCast(Concat, ConcatIntTy);
+
+ if (Ty != ConcatIntTy) {
+ Worklist.pushValue(Result);
+ Result = Builder.CreateZExt(Result, Ty);
+ }
+
+ if (ShAmtX > 0) {
+ Worklist.pushValue(Result);
+ Result = Builder.CreateShl(Result, ShAmtX);
+ }
+
+ replaceValue(I, *Result);
+ return true;
+}
+
/// Try to convert "shuffle (binop (shuffle, shuffle)), undef"
/// --> "binop (shuffle), (shuffle)".
bool VectorCombine::foldPermuteOfBinops(Instruction &I) {
@@ -2902,6 +3009,9 @@ bool VectorCombine::run() {
if (TryEarlyFoldsOnly)
return;
+ if (I.getType()->isIntegerTy())
+ MadeChange |= foldConcatOfBoolMasks(I);
+
// Otherwise, try folds that improve codegen but may interfere with
// early IR canonicalizations.
// The type checking is for run-time efficiency. We can avoid wasting time
diff --git a/llvm/test/Transforms/PhaseOrdering/X86/concat-boolmasks.ll b/llvm/test/Transforms/PhaseOrdering/X86/concat-boolmasks.ll
index 07bfbffa9518fa..d4c3de664c80af 100644
--- a/llvm/test/Transforms/PhaseOrdering/X86/concat-boolmasks.ll
+++ b/llvm/test/Transforms/PhaseOrdering/X86/concat-boolmasks.ll
@@ -1,20 +1,22 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
-; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64 | FileCheck %s
-; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64-v2 | FileCheck %s
-; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64-v3 | FileCheck %s
-; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64-v4 | FileCheck %s
+; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64 | FileCheck %s --check-prefixes=CHECK,SSE
+; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64-v2 | FileCheck %s --check-prefixes=CHECK,SSE
+; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64-v3 | FileCheck %s --check-prefixes=CHECK,AVX,AVX2
+; RUN: opt < %s -O3 -S -mtriple=x86_64-- -mcpu=x86-64-v4 | FileCheck %s --check-prefixes=CHECK,AVX,AVX512
define i32 @movmsk_i32_v32i8_v16i8(<16 x i8> %v0, <16 x i8> %v1) {
-; CHECK-LABEL: @movmsk_i32_v32i8_v16i8(
-; CHECK-NEXT: [[C0:%.*]] = icmp slt <16 x i8> [[V0:%.*]], zeroinitializer
-; CHECK-NEXT: [[C1:%.*]] = icmp slt <16 x i8> [[V1:%.*]], zeroinitializer
-; CHECK-NEXT: [[B0:%.*]] = bitcast <16 x i1> [[C0]] to i16
-; CHECK-NEXT: [[B1:%.*]] = bitcast <16 x i1> [[C1]] to i16
-; CHECK-NEXT: [[Z0:%.*]] = zext i16 [[B0]] to i32
-; CHECK-NEXT: [[Z1:%.*]] = zext i16 [[B1]] to i32
-; CHECK-NEXT: [[S0:%.*]] = shl nuw i32 [[Z0]], 16
-; CHECK-NEXT: [[OR:%.*]] = or disjoint i32 [[S0]], [[Z1]]
-; CHECK-NEXT: ret i32 [[OR]]
+; SSE-LABEL: @movmsk_i32_v32i8_v16i8(
+; SSE-NEXT: [[TMP1:%.*]] = shufflevector <16 x i8> [[V1:%.*]], <16 x i8> [[V0:%.*]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; SSE-NEXT: [[TMP2:%.*]] = icmp slt <32 x i8> [[TMP1]], zeroinitializer
+; SSE-NEXT: [[OR:%.*]] = bitcast <32 x i1> [[TMP2]] to i32
+; SSE-NEXT: ret i32 [[OR]]
+;
+; AVX-LABEL: @movmsk_i32_v32i8_v16i8(
+; AVX-NEXT: [[C0:%.*]] = icmp slt <16 x i8> [[V0:%.*]], zeroinitializer
+; AVX-NEXT: [[C1:%.*]] = icmp slt <16 x i8> [[V1:%.*]], zeroinitializer
+; AVX-NEXT: [[TMP1:%.*]] = shufflevector <16 x i1> [[C1]], <16 x i1> [[C0]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; AVX-NEXT: [[OR:%.*]] = bitcast <32 x i1> [[TMP1]] to i32
+; AVX-NEXT: ret i32 [[OR]]
;
%c0 = icmp slt <16 x i8> %v0, zeroinitializer
%c1 = icmp slt <16 x i8> %v1, zeroinitializer
@@ -28,16 +30,20 @@ define i32 @movmsk_i32_v32i8_v16i8(<16 x i8> %v0, <16 x i8> %v1) {
}
define i32 @movmsk_i32_v8i32_v4i32(<4 x i32> %v0, <4 x i32> %v1) {
-; CHECK-LABEL: @movmsk_i32_v8i32_v4i32(
-; CHECK-NEXT: [[C0:%.*]] = icmp slt <4 x i32> [[V0:%.*]], zeroinitializer
-; CHECK-NEXT: [[C1:%.*]] = icmp slt <4 x i32> [[V1:%.*]], zeroinitializer
-; CHECK-NEXT: [[B0:%.*]] = bitcast <4 x i1> [[C0]] to i4
-; CHECK-NEXT: [[B1:%.*]] = bitcast <4 x i1> [[C1]] to i4
-; CHECK-NEXT: [[Z0:%.*]] = zext i4 [[B0]] to i32
-; CHECK-NEXT: [[Z1:%.*]] = zext i4 [[B1]] to i32
-; CHECK-NEXT: [[S0:%.*]] = shl nuw nsw i32 [[Z0]], 4
-; CHECK-NEXT: [[OR:%.*]] = or disjoint i32 [[S0]], [[Z1]]
-; CHECK-NEXT: ret i32 [[OR]]
+; SSE-LABEL: @movmsk_i32_v8i32_v4i32(
+; SSE-NEXT: [[TMP1:%.*]] = shufflevector <4 x i32> [[V1:%.*]], <4 x i32> [[V0:%.*]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
+; SSE-NEXT: [[TMP2:%.*]] = icmp slt <8 x i32> [[TMP1]], zeroinitializer
+; SSE-NEXT: [[TMP3:%.*]] = bitcast <8 x i1> [[TMP2]] to i8
+; SSE-NEXT: [[OR:%.*]] = zext i8 [[TMP3]] to i32
+; SSE-NEXT: ret i32 [[OR]]
+;
+; AVX-LABEL: @movmsk_i32_v8i32_v4i32(
+; AVX-NEXT: [[C0:%.*]] = icmp slt <4 x i32> [[V0:%.*]], zeroinitializer
+; AVX-NEXT: [[C1:%.*]] = icmp slt <4 x i32> [[V1:%.*]], zeroinitializer
+; AVX-NEXT: [[TMP1:%.*]] = shufflevector <4 x i1> [[C1]], <4 x i1> [[C0]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
+; AVX-NEXT: [[TMP2:%.*]] = bitcast <8 x i1> [[TMP1]] to i8
+; AVX-NEXT: [[OR:%.*]] = zext i8 [[TMP2]] to i32
+; AVX-NEXT: ret i32 [[OR]]
;
%c0 = icmp slt <4 x i32> %v0, zeroinitializer
%c1 = icmp slt <4 x i32> %v1, zeroinitializer
@@ -51,16 +57,20 @@ define i32 @movmsk_i32_v8i32_v4i32(<4 x i32> %v0, <4 x i32> %v1) {
}
define i64 @movmsk_i64_v32i8_v16i8(<16 x i8> %v0, <16 x i8> %v1) {
-; CHECK-LABEL: @movmsk_i64_v32i8_v16i8(
-; CHECK-NEXT: [[C0:%.*]] = icmp slt <16 x i8> [[V0:%.*]], zeroinitializer
-; CHECK-NEXT: [[C1:%.*]] = icmp slt <16 x i8> [[V1:%.*]], zeroinitializer
-; CHECK-NEXT: [[B0:%.*]] = bitcast <16 x i1> [[C0]] to i16
-; CHECK-NEXT: [[B1:%.*]] = bitcast <16 x i1> [[C1]] to i16
-; CHECK-NEXT: [[Z0:%.*]] = zext i16 [[B0]] to i64
-; CHECK-NEXT: [[Z1:%.*]] = zext i16 [[B1]] to i64
-; CHECK-NEXT: [[S0:%.*]] = shl nuw nsw i64 [[Z0]], 16
-; CHECK-NEXT: [[OR:%.*]] = or disjoint i64 [[S0]], [[Z1]]
-; CHECK-NEXT: ret i64 [[OR]]
+; SSE-LABEL: @movmsk_i64_v32i8_v16i8(
+; SSE-NEXT: [[TMP1:%.*]] = shufflevector <16 x i8> [[V1:%.*]], <16 x i8> [[V0:%.*]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; SSE-NEXT: [[TMP2:%.*]] = icmp slt <32 x i8> [[TMP1]], zeroinitializer
+; SSE-NEXT: [[TMP3:%.*]] = bitcast <32 x i1> [[TMP2]] to i32
+; SSE-NEXT: [[OR:%.*]] = zext i32 [[TMP3]] to i64
+; SSE-NEXT: ret i64 [[OR]]
+;
+; AVX-LABEL: @movmsk_i64_v32i8_v16i8(
+; AVX-NEXT: [[C0:%.*]] = icmp slt <16 x i8> [[V0:%.*]], zeroinitializer
+; AVX-NEXT: [[C1:%.*]] = icmp slt <16 x i8> [[V1:%.*]], zeroinitializer
+; AVX-NEXT: [[TMP1:%.*]] = shufflevector <16 x i1> [[C1]], <16 x i1> [[C0]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; AVX-NEXT: [[TMP2:%.*]] = bitcast <32 x i1> [[TMP1]] to i32
+; AVX-NEXT: [[OR:%.*]] = zext i32 [[TMP2]] to i64
+; AVX-NEXT: ret i64 [[OR]]
;
%c0 = icmp slt <16 x i8> %v0, zeroinitializer
%c1 = icmp slt <16 x i8> %v1, zeroinitializer
@@ -74,16 +84,20 @@ define i64 @movmsk_i64_v32i8_v16i8(<16 x i8> %v0, <16 x i8> %v1) {
}
define i64 @movmsk_i64_v8i32_v4i32(<4 x i32> %v0, <4 x i32> %v1) {
-; CHECK-LABEL: @movmsk_i64_v8i32_v4i32(
-; CHECK-NEXT: [[C0:%.*]] = icmp slt <4 x i32> [[V0:%.*]], zeroinitializer
-; CHECK-NEXT: [[C1:%.*]] = icmp slt <4 x i32> [[V1:%.*]], zeroinitializer
-; CHECK-NEXT: [[B0:%.*]] = bitcast <4 x i1> [[C0]] to i4
-; CHECK-NEXT: [[B1:%.*]] = bitcast <4 x i1> [[C1]] to i4
-; CHECK-NEXT: [[Z0:%.*]] = zext i4 [[B0]] to i64
-; CHECK-NEXT: [[Z1:%.*]] = zext i4 [[B1]] to i64
-; CHECK-NEXT: [[S0:%.*]] = shl nuw nsw i64 [[Z0]], 4
-; CHECK-NEXT: [[OR:%.*]] = or disjoint i64 [[S0]], [[Z1]]
-; CHECK-NEXT: ret i64 [[OR]]
+; SSE-LABEL: @movmsk_i64_v8i32_v4i32(
+; SSE-NEXT: [[TMP1:%.*]] = shufflevector <4 x i32> [[V1:%.*]], <4 x i32> [[V0:%.*]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
+; SSE-NEXT: [[TMP2:%.*]] = icmp slt <8 x i32> [[TMP1]], zeroinitializer
+; SSE-NEXT: [[TMP3:%.*]] = bitcast <8 x i1> [[TMP2]] to i8
+; SSE-NEXT: [[OR:%.*]] = zext i8 [[TMP3]] to i64
+; SSE-NEXT: ret i64 [[OR]]
+;
+; AVX-LABEL: @movmsk_i64_v8i32_v4i32(
+; AVX-NEXT: [[C0:%.*]] = icmp slt <4 x i32> [[V0:%.*]], zeroinitializer
+; AVX-NEXT: [[C1:%.*]] = icmp slt <4 x i32> [[V1:%.*]], zeroinitializer
+; AVX-NEXT: [[TMP1:%.*]] = shufflevector <4 x i1> [[C1]], <4 x i1> [[C0]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
+; AVX-NEXT: [[TMP2:%.*]] = bitcast <8 x i1> [[TMP1]] to i8
+; AVX-NEXT: [[OR:%.*]] = zext i8 [[TMP2]] to i64
+; AVX-NEXT: ret i64 [[OR]]
;
%c0 = icmp slt <4 x i32> %v0, zeroinitializer
%c1 = icmp slt <4 x i32> %v1, zeroinitializer
@@ -97,26 +111,24 @@ define i64 @movmsk_i64_v8i32_v4i32(<4 x i32> %v0, <4 x i32> %v1) {
}
define i64 @movmsk_i64_v64i8_v16i8(<16 x i8> %v0, <16 x i8> %v1, <16 x i8> %v2, <16 x i8> %v3) {
-; CHECK-LABEL: @movmsk_i64_v64i8_v16i8(
-; CHECK-NEXT: [[C0:%.*]] = icmp slt <16 x i8> [[V0:%.*]], zeroinitializer
-; CHECK-NEXT: [[C1:%.*]] = icmp slt <16 x i8> [[V1:%.*]], zeroinitializer
-; CHECK-NEXT: [[C2:%.*]] = icmp slt <16 x i8> [[V2:%.*]], zeroinitializer
-; CHECK-NEXT: [[C3:%.*]] = icmp slt <16 x i8> [[V3:%.*]], zeroinitializer
-; CHECK-NEXT: [[B0:%.*]] = bitcast <16 x i1> [[C0]] to i16
-; CHECK-NEXT: [[B1:%.*]] = bitcast <16 x i1> [[C1]] to i16
-; CHECK-NEXT: [[B2:%.*]] = bitcast <16 x i1> [[C2]] to i16
-; CHECK-NEXT: [[B3:%.*]] = bitcast <16 x i1> [[C3]] to i16
-; CHECK-NEXT: [[Z0:%.*]] = zext i16 [[B0]] to i64
-; CHECK-NEXT: [[Z1:%.*]] = zext i16 [[B1]] to i64
-; CHECK-NEXT: [[Z2:%.*]] = zext i16 [[B2]] to i64
-; CHECK-NEXT: [[Z3:%.*]] = zext i16 [[B3]] to i64
-; CHECK-NEXT: [[S0:%.*]] = shl nuw i64 [[Z0]], 48
-; CHECK-NEXT: [[S1:%.*]] = shl nuw nsw i64 [[Z1]], 32
-; CHECK-NEXT: [[S2:%.*]] = shl nuw nsw i64 [[Z2]], 16
-; CHECK-NEXT: [[OR0:%.*]] = or disjoint i64 [[S1]], [[S0]]
-; CHECK-NEXT: [[OR1:%.*]] = or disjoint i64 [[S2]], [[Z3]]
-; CHECK-NEXT: [[OR:%.*]] = or disjoint i64 [[OR1]], [[OR0]]
-; CHECK-NEXT: ret i64 [[OR]]
+; SSE-LABEL: @movmsk_i64_v64i8_v16i8(
+; SSE-NEXT: [[TMP1:%.*]] = shufflevector <16 x i8> [[V3:%.*]], <16 x i8> [[V2:%.*]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; SSE-NEXT: [[TMP2:%.*]] = shufflevector <16 x i8> [[V1:%.*]], <16 x i8> [[V0:%.*]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; SSE-NEXT: [[TMP3:%.*]] = shufflevector <32 x i8> [[TMP1]], <32 x i8> [[TMP2]], <64 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31, i32 32, i32 33, i32 34, i32 35, i32 36, i32 37, i32 38, i32 39, i32 40, i32 41, i32 42, i32 43, i32 44, i32 45, i32 46, i32 47, i32 48, i32 49, i32 50, i32 51, i32 52, i32 53, i32 54, i32 55, i32 56, i32 57, i32 58, i32 59, i32 60, i32 61, i32 62, i32 63>
+; SSE-NEXT: [[TMP4:%.*]] = icmp slt <64 x i8> [[TMP3]], zeroinitializer
+; SSE-NEXT: [[OR:%.*]] = bitcast <64 x i1> [[TMP4]] to i64
+; SSE-NEXT: ret i64 [[OR]]
+;
+; AVX-LABEL: @movmsk_i64_v64i8_v16i8(
+; AVX-NEXT: [[C0:%.*]] = icmp slt <16 x i8> [[V0:%.*]], zeroinitializer
+; AVX-NEXT: [[C1:%.*]] = icmp slt <16 x i8> [[V1:%.*]], zeroinitializer
+; AVX-NEXT: [[C2:%.*]] = icmp slt <16 x i8> [[V2:%.*]], zeroinitializer
+; AVX-NEXT: [[C3:%.*]] = icmp slt <16 x i8> [[V3:%.*]], zeroinitializer
+; AVX-NEXT: [[TMP2:%.*]] = shufflevector <16 x i1> [[C1]], <16 x i1> [[C0]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; AVX-NEXT: [[TMP1:%.*]] = shufflevector <16 x i1> [[C3]], <16 x i1> [[C2]], <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
+; AVX-NEXT: [[TMP3:%.*]] = shufflevector <32 x i1> [[TMP1]], <32 x i1> [[TMP2]], <64 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31, i32 32, i32 33, i32 34, i32 35, i32 36, i32 37, i32 38, i32 39, i32 40, i32 41, i32 42, i32 43, i32 44, i32 45, i32 46, i32 47, i32 48, i32 49, i32 50, i32 51, i32 52, i32 53, i32 54, i32 55, i32 56, i32 57, i32 58, i32 59, i32 60, i32 61, i32 62, i32 63>
+; AVX-NEXT: [[OR:%.*]] = bitcast <64 x i1> [[TMP3]] to i64
+; AVX-NEXT: ret i64 [[OR]]
;
%c0 = icmp slt <16 x i8> %v0, zeroinitializer
%c1 = icmp slt <16 x i8> %v1, zeroinitializer
@@ -140,26 +152,26 @@ define i64 @movmsk_i64_v64i8_v16i8(<16 x i8> %v0, <16 x i8> %v1, <16 x i8> %v2,
}
define i64 @movmsk_i64_v32i32_v4i32(<4 x i32> %v0, <4 x i32> %v1, <4 x i32> %v2, <4 x i32> %v3) {
-; CHECK-LABEL: @movmsk_i64_v32i32_v4i32(
-; CHECK-NEXT: [[C0:%.*]] = icmp slt <4 x i32> [[V0:%.*]], zeroinitializer
-; CHECK-NEXT: [[C1:%.*]] = icmp slt <4 x i32> [[V1:%.*]], zeroinitializer
-; CHECK-NEXT: [[C2:%.*]] = icmp slt <4 x i32> [[V2:%.*]], zeroinitializer
-; CHECK-NEXT: [[C3:%.*]] = icmp slt <4 x i32> [[V3:%.*]], zeroinitializer
-; CHECK-NEXT: [[B0:%.*]] = bitcast <4 x i1> [[C0]] to i4
-; CHECK-NEXT: [[B1:%.*]] = bitcast <4 x i1> [[C1]] to i4
-; CHECK-NEXT: [[B2:%.*]] = bitcast <4 x i1> [[C2]] to i4
-; CHECK-NEXT: [[B3:%.*]] = bitcast <4 x i1> [[C3]] to i4
-; CHECK-NEXT: [[Z0:%.*]] = zext i4 [[B0]] to i64
-; CHECK-NEXT: [[Z1:%.*]] = zext i4 [[B1]] to i64
-; CHECK-NEXT: [[Z2:%.*]] = zext i4 [[B2]] to i64
-; CHECK-NEXT: [[Z3:%.*]] = zext i4 [[B3]] to i64
-; CHECK-NEXT: [[S0:%.*]] = shl nuw nsw i64 [[Z0]], 12
-; CHECK-NEXT: [[S1:%.*]] = shl nuw nsw i64 [[Z1]], 8
-; CHECK-NEXT: [[S2:%.*]] = shl nuw nsw i64 [[Z2]], 4
-; CHECK-NEXT: [[OR0:%.*]] = or disjoint i64 [[S1]], [[S0]]
-; CHECK-NEXT: [[OR1:%.*]] = or disjoint i64 [[S2]], [[Z3]]
-; CHECK-NEXT: [[OR:%.*]] = or disjoint i64 [[OR1]], [[OR0]]
-; CHECK-NEXT: ret i64 [[OR]]
+; SSE-LABEL: @movmsk_i64_v32i32_v4i32(
+; SSE-NEXT: [[TMP1:%.*]] = shufflevector <4 x i32> [[V3:%.*]], <4 x i32> [[V2:%.*]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
+; SSE-NEXT: [[TMP2:%.*]] = shuf...
[truncated]
@llvm/pr-subscribers-vectorizers

Author: Simon Pilgrim (RKSimon)

(Same summary and truncated patch as the @llvm/pr-subscribers-llvm-transforms comment above.)
✅ With the latest revision this PR passed the C/C++ code formatter.
LG
…, C))" -> "(bitcast (concat X, Y))" MOVMSK bool mask style patterns Mask/Bool vectors are often bitcast to/from scalar integers, in particular when concatenating mask results, often this is due to the difficulties of working with vector of bools on C/C++. On x86 this typically involves the MOVMSK/KMOV instructions. To concatenate bool masks, these are typically cast to scalars, which are then zero-extended, shifted and OR'd together. This patch attempts to match these scalar concatenation patterns and convert them to vector shuffles instead. This in turn often assists with further vector combines, depending on the cost model. Fixes llvm#111431
b47ea97
to
8f6b1cc
Compare
Looks like a buildbot failure caused by this patch: https://lab.llvm.org/buildbot/#/builders/55/builds/4146
Thanks @vitalybuka - I've now reproduced this and am working on a fix.
Is it worth having some tests that are not in PhaseOrdering too? In case the phase ordering test changes and we don't realise we have lost the test coverage.
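A hedged sketch of what such a dedicated VectorCombine test could look like (hand-written; CHECK lines omitted since the fold is cost-model gated), exercising the two-shift form of the pattern:

; RUN: opt < %s -passes=vector-combine -S -mtriple=x86_64-- | FileCheck %s
define i64 @concat_boolmask_both_shifted(<8 x i1> %x, <8 x i1> %y) {
  %b0 = bitcast <8 x i1> %x to i8
  %b1 = bitcast <8 x i1> %y to i8
  %z0 = zext i8 %b0 to i64
  %z1 = zext i8 %b1 to i64
  %s0 = shl i64 %z0, 16
  %s1 = shl i64 %z1, 24
  %or = or disjoint i64 %s0, %s1
  ret i64 %or
}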
Noticed while investigating a crash in #119559 - we don't account for I being replaced and its Type being reallocated. So hoist the checks to the start of the loop.
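A minimal C++ sketch of the hazard that commit describes (the loop shape and the earlier fold's name are illustrative, not the exact VectorCombine::run() code):

  // A successful fold may RAUW and erase I, so querying I.getType()
  // afterwards can read freed memory. The fix: hoist every type-derived
  // check to the start, before any fold can invalidate I.
  auto FoldInst = [&](Instruction &I) {
    bool IsIntegerTy = I.getType()->isIntegerTy(); // hoisted and cached
    if (foldSomeEarlierPattern(I)) // hypothetical fold; may erase I
      return true;                 // must not touch I past this point
    if (IsIntegerTy)               // safe: uses the cached answer
      return foldConcatOfBoolMasks(I);
    return false;
  };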
…, C))" -> "(bitcast (concat X, Y))" MOVMSK bool mask style patterns (#119695) Mask/Bool vectors are often bitcast to/from scalar integers, in particular when concatenating mask results, often this is due to the difficulties of working with vector of bools on C/C++. On x86 this typically involves the MOVMSK/KMOV instructions. To concatenate bool masks, these are typically cast to scalars, which are then zero-extended, shifted and OR'd together. This patch attempts to match these scalar concatenation patterns and convert them to vector shuffles instead. This in turn often assists with further vector combines, depending on the cost model. Reapplied patch from #119559 - fixed use after free issue. Fixes #111431
Mask/Bool vectors are often bitcast to/from scalar integers, in particular when concatenating mask results; this is often due to the difficulty of working with vectors of bool in C/C++. On x86 this typically involves the MOVMSK/KMOV instructions.
To concatenate bool masks, these are typically cast to scalars, which are then zero-extended, shifted and OR'd together.
This patch attempts to match these scalar concatenation patterns and convert them to vector shuffles instead. This in turn often assists with further vector combines, depending on the cost model.
Fixes #111431
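For context, a hedged sketch of the kind of C++ source that produces this pattern (a hypothetical helper; _mm_movemask_epi8 is the standard SSE2 intrinsic):

  #include <immintrin.h>
  #include <cstdint>

  // Each MOVMSK result is a scalar, so concatenating two 16-lane
  // sign-bit masks forces the zext+shl+or shape this fold targets.
  uint32_t concat_masks(__m128i v0, __m128i v1) {
    uint32_t lo = static_cast<uint32_t>(_mm_movemask_epi8(v0));
    uint32_t hi = static_cast<uint32_t>(_mm_movemask_epi8(v1));
    return lo | (hi << 16);
  }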