[AMDGPU] Improve uniform argument handling in InstCombineIntrinsic #105812
Common up handling of intrinsics that are a no-op on uniform arguments. This catches a couple of new cases:
readlane (readlane x, y), z -> readlane x, y (for any z; z does not have to equal y)
permlane64 (readfirstlane x) -> readfirstlane x (and likewise for any other uniform argument to permlane64)
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-backend-amdgpu
Author: Jay Foad (jayfoad)
Full diff: https://github.com/llvm/llvm-project/pull/105812.diff (2 files affected)
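The two new folds can be sketched as a tiny peephole model. This is a hypothetical illustration only, not the C++ implementation (which operates on LLVM IR): intrinsic calls are tuples `('name', args...)`, plain strings stand for arbitrary, possibly divergent values, and ints stand for constants. The `ALWAYS_UNIFORM` set is an assumption of the sketch; the real check is `AMDGPU::isIntrinsicAlwaysUniform`.

```python
# Results of these intrinsics are lane-uniform by definition (assumed
# subset for this sketch).
ALWAYS_UNIFORM = {"readfirstlane", "readlane"}

def is_trivially_uniform(v):
    # Constants are uniform, and so are results of always-uniform
    # intrinsics. (The real code also requires def and use to be in the
    # same basic block; this sketch ignores that.)
    if isinstance(v, int):
        return True
    return isinstance(v, tuple) and v[0] in ALWAYS_UNIFORM

def fold(call):
    # readfirstlane/readlane/permlane64 of a uniform value is a no-op.
    name, src = call[0], call[1]
    if name in ("readfirstlane", "readlane", "permlane64") and is_trivially_uniform(src):
        return src
    return call

# readlane (readlane x, y), z -> readlane x, y  (for any z)
assert fold(("readlane", ("readlane", "x", "y"), "z")) == ("readlane", "x", "y")
# permlane64 (readfirstlane x) -> readfirstlane x
assert fold(("permlane64", ("readfirstlane", "x"))) == ("readfirstlane", "x")
```

Note how the `readlane (readlane x, y), z` case falls out for free: the inner readlane's result is uniform, so the outer lane index `z` is irrelevant.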
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp b/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp
index 9197404309663a..58d1cfae4a10e9 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp
@@ -440,6 +440,22 @@ static bool canContractSqrtToRsq(const FPMathOperator *SqrtOp) {
SqrtOp->getType()->isHalfTy();
}
+/// Return true if we can easily prove that use U is uniform.
+static bool isTriviallyUniform(const Use &U) {
+ Value *V = U.get();
+ if (isa<Constant>(V))
+ return true;
+ if (auto *I = dyn_cast<Instruction>(V)) {
+ // If I and U are in different blocks then there is a possibility of
+ // temporal divergence.
+ if (I->getParent() != cast<Instruction>(U.getUser())->getParent())
+ return false;
+ if (const auto *II = dyn_cast<IntrinsicInst>(I))
+ return AMDGPU::isIntrinsicAlwaysUniform(II->getIntrinsicID());
+ }
+ return false;
+}
+
std::optional<Instruction *>
GCNTTIImpl::instCombineIntrinsic(InstCombiner &IC, IntrinsicInst &II) const {
Intrinsic::ID IID = II.getIntrinsicID();
@@ -1060,46 +1076,12 @@ GCNTTIImpl::instCombineIntrinsic(InstCombiner &IC, IntrinsicInst &II) const {
return IC.replaceOperand(II, 0, UndefValue::get(VDstIn->getType()));
}
case Intrinsic::amdgcn_permlane64:
- // A constant value is trivially uniform.
- if (Constant *C = dyn_cast<Constant>(II.getArgOperand(0))) {
- return IC.replaceInstUsesWith(II, C);
- }
- break;
case Intrinsic::amdgcn_readfirstlane:
case Intrinsic::amdgcn_readlane: {
- // A constant value is trivially uniform.
- if (Constant *C = dyn_cast<Constant>(II.getArgOperand(0))) {
- return IC.replaceInstUsesWith(II, C);
- }
-
- // The rest of these may not be safe if the exec may not be the same between
- // the def and use.
- Value *Src = II.getArgOperand(0);
- Instruction *SrcInst = dyn_cast<Instruction>(Src);
- if (SrcInst && SrcInst->getParent() != II.getParent())
- break;
-
- // readfirstlane (readfirstlane x) -> readfirstlane x
- // readlane (readfirstlane x), y -> readfirstlane x
- if (match(Src,
- PatternMatch::m_Intrinsic<Intrinsic::amdgcn_readfirstlane>())) {
- return IC.replaceInstUsesWith(II, Src);
- }
-
- if (IID == Intrinsic::amdgcn_readfirstlane) {
- // readfirstlane (readlane x, y) -> readlane x, y
- if (match(Src, PatternMatch::m_Intrinsic<Intrinsic::amdgcn_readlane>())) {
- return IC.replaceInstUsesWith(II, Src);
- }
- } else {
- // readlane (readlane x, y), y -> readlane x, y
- if (match(Src, PatternMatch::m_Intrinsic<Intrinsic::amdgcn_readlane>(
- PatternMatch::m_Value(),
- PatternMatch::m_Specific(II.getArgOperand(1))))) {
- return IC.replaceInstUsesWith(II, Src);
- }
- }
-
+ // If the first argument is uniform these intrinsics return it unchanged.
+ const Use &Src = II.getArgOperandUse(0);
+ if (isTriviallyUniform(Src))
+ return IC.replaceInstUsesWith(II, Src.get());
break;
}
case Intrinsic::amdgcn_trig_preop: {
diff --git a/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll b/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll
index 9cb79b26448658..f3a3b8c1dc5d8a 100644
--- a/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll
+++ b/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll
@@ -2888,8 +2888,7 @@ define i32 @readlane_idempotent(i32 %arg, i32 %lane) {
define i32 @readlane_idempotent_different_lanes(i32 %arg, i32 %lane0, i32 %lane1) {
; CHECK-LABEL: @readlane_idempotent_different_lanes(
; CHECK-NEXT: [[READ0:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[ARG:%.*]], i32 [[LANE0:%.*]])
-; CHECK-NEXT: [[READ1:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[READ0]], i32 [[LANE1:%.*]])
-; CHECK-NEXT: ret i32 [[READ1]]
+; CHECK-NEXT: ret i32 [[READ0]]
;
%read0 = call i32 @llvm.amdgcn.readlane(i32 %arg, i32 %lane0)
%read1 = call i32 @llvm.amdgcn.readlane(i32 %read0, i32 %lane1)
@@ -3061,6 +3060,22 @@ define amdgpu_kernel void @permlanex16_fetch_invalid_bound_ctrl(ptr addrspace(1)
ret void
}
+; --------------------------------------------------------------------
+; llvm.amdgcn.permlane64
+; --------------------------------------------------------------------
+
+define amdgpu_kernel void @permlane64_uniform(ptr addrspace(1) %out, i32 %src0) {
+; CHECK-LABEL: @permlane64_uniform(
+; CHECK-NEXT: [[SRC1:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[SRC0:%.*]])
+; CHECK-NEXT: store i32 [[SRC1]], ptr addrspace(1) [[OUT:%.*]], align 4
+; CHECK-NEXT: ret void
+;
+ %src1 = call i32 @llvm.amdgcn.readfirstlane(i32 %src0)
+ %res = call i32 @llvm.amdgcn.permlane64(i32 %src1)
+ store i32 %res, ptr addrspace(1) %out
+ ret void
+}
+
; --------------------------------------------------------------------
; llvm.amdgcn.image.sample a16
; --------------------------------------------------------------------
See https://reviews.llvm.org/D63431 for the original implementation of the readlane/readfirstlane folds.
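The new `isTriviallyUniform` helper generalizes those folds. Its checks can be modeled as follows (a hypothetical Python sketch; the real code works on `llvm::Use` and `Instruction`, and the intrinsic set here is assumed):

```python
# Model of isTriviallyUniform: a value is trivially uniform if it is a
# constant, or the result of an always-uniform intrinsic defined in the
# same basic block as the use. A def in a different block risks temporal
# divergence, because EXEC may change between def and use.

ALWAYS_UNIFORM = {"readfirstlane", "readlane"}  # assumed subset for this sketch

def is_trivially_uniform(value, use_block):
    if value.get("kind") == "const":
        return True
    if value.get("kind") == "intrinsic":
        if value["block"] != use_block:
            return False  # possible temporal divergence
        return value["name"] in ALWAYS_UNIFORM
    return False  # e.g. a plain function argument: not provably uniform

assert is_trivially_uniform({"kind": "const"}, "bb0")
assert is_trivially_uniform({"kind": "intrinsic", "name": "readfirstlane", "block": "bb0"}, "bb0")
assert not is_trivially_uniform({"kind": "intrinsic", "name": "readfirstlane", "block": "bb1"}, "bb0")
```

The same-block requirement is conservative: even a uniform value can reach a use under a different EXEC mask after control flow.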
LGTM, with an optional nit. I have been mulling over a similar idea: I would like to initiate some effort on a combiner pass that uses UniformityAnalysis in an AMD-specific way. I expect we would need to find one or more strategic places to run it, since it can both benefit and benefit from optimizations that transform the CFG.
    // temporal divergence.
    if (I->getParent() != cast<Instruction>(U.getUser())->getParent())
      return false;
    if (const auto *II = dyn_cast<IntrinsicInst>(I))
Would it be slightly faster if we checked this first, before checking whether it's the same block? I mean, return false if it is not a uniform intrinsic. If the dyn_cast to IntrinsicInst were the outer condition, it would also really bring out the fact that we are doing a trivial check on uniform intrinsics only, and not on instructions in general.
Yeah, fair enough. I was trying to leave the door open for extending this code to handle other Instruction types, but I don't actually have a use case in mind for that.
LLVM Buildbot has detected a new failure on builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/30/builds/4664
Here is the relevant piece of the build log for reference: