
[AMDGPU] Improve uniform argument handling in InstCombineIntrinsic #105812

Merged 2 commits on Aug 23, 2024
Changes from 1 commit
58 changes: 20 additions & 38 deletions llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp
@@ -440,6 +440,22 @@ static bool canContractSqrtToRsq(const FPMathOperator *SqrtOp) {
SqrtOp->getType()->isHalfTy();
}

/// Return true if we can easily prove that use U is uniform.
static bool isTriviallyUniform(const Use &U) {
Value *V = U.get();
if (isa<Constant>(V))
return true;
if (auto *I = dyn_cast<Instruction>(V)) {
// If I and U are in different blocks then there is a possibility of
// temporal divergence.
if (I->getParent() != cast<Instruction>(U.getUser())->getParent())
return false;
if (const auto *II = dyn_cast<IntrinsicInst>(I))
Collaborator

Would it be slightly faster if we checked this first, before checking whether it's the same block? I mean, return false if it is not a uniform intrinsic? If the dyn_cast to IntrinsicInst were the outer condition, that would also really bring out the fact that we are doing a trivial check on uniform intrinsics only, and not on instructions in general.

Contributor Author

Yeah, fair enough. I was trying to leave the door open for extending this code to handle other Instruction types, but I don't actually have a use case in mind for that.

return AMDGPU::isIntrinsicAlwaysUniform(II->getIntrinsicID());
}
return false;
}
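
For illustration only, here is a minimal sketch of how the helper might look with the reviewer's suggested reordering, where the dyn_cast to IntrinsicInst is the outer condition and the same-block check only runs for always-uniform intrinsics. This is an assumption about the restructuring, not the contents of the follow-up commit:

static bool isTriviallyUniform(const Use &U) {
  Value *V = U.get();
  if (isa<Constant>(V))
    return true;
  if (const auto *II = dyn_cast<IntrinsicInst>(V)) {
    // Fail fast if the value is not an always-uniform intrinsic.
    if (!AMDGPU::isIntrinsicAlwaysUniform(II->getIntrinsicID()))
      return false;
    // Requiring the def and the use to be in the same block rules out
    // temporal divergence.
    return II->getParent() == cast<Instruction>(U.getUser())->getParent();
  }
  return false;
}

Either ordering is correct; the reordered form simply returns early for non-intrinsic instructions and makes explicit that only always-uniform intrinsics are handled.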

std::optional<Instruction *>
GCNTTIImpl::instCombineIntrinsic(InstCombiner &IC, IntrinsicInst &II) const {
Intrinsic::ID IID = II.getIntrinsicID();
@@ -1060,46 +1076,12 @@ GCNTTIImpl::instCombineIntrinsic(InstCombiner &IC, IntrinsicInst &II) const {
return IC.replaceOperand(II, 0, UndefValue::get(VDstIn->getType()));
}
case Intrinsic::amdgcn_permlane64:
// A constant value is trivially uniform.
if (Constant *C = dyn_cast<Constant>(II.getArgOperand(0))) {
return IC.replaceInstUsesWith(II, C);
}
break;
case Intrinsic::amdgcn_readfirstlane:
case Intrinsic::amdgcn_readlane: {
// A constant value is trivially uniform.
if (Constant *C = dyn_cast<Constant>(II.getArgOperand(0))) {
return IC.replaceInstUsesWith(II, C);
}

// The rest of these may not be safe if the exec may not be the same between
// the def and use.
Value *Src = II.getArgOperand(0);
Instruction *SrcInst = dyn_cast<Instruction>(Src);
if (SrcInst && SrcInst->getParent() != II.getParent())
break;

// readfirstlane (readfirstlane x) -> readfirstlane x
// readlane (readfirstlane x), y -> readfirstlane x
if (match(Src,
PatternMatch::m_Intrinsic<Intrinsic::amdgcn_readfirstlane>())) {
return IC.replaceInstUsesWith(II, Src);
}

if (IID == Intrinsic::amdgcn_readfirstlane) {
// readfirstlane (readlane x, y) -> readlane x, y
if (match(Src, PatternMatch::m_Intrinsic<Intrinsic::amdgcn_readlane>())) {
return IC.replaceInstUsesWith(II, Src);
}
} else {
// readlane (readlane x, y), y -> readlane x, y
if (match(Src, PatternMatch::m_Intrinsic<Intrinsic::amdgcn_readlane>(
PatternMatch::m_Value(),
PatternMatch::m_Specific(II.getArgOperand(1))))) {
return IC.replaceInstUsesWith(II, Src);
}
}

// If the first argument is uniform these intrinsics return it unchanged.
const Use &Src = II.getArgOperandUse(0);
if (isTriviallyUniform(Src))
return IC.replaceInstUsesWith(II, Src.get());
break;
}
case Intrinsic::amdgcn_trig_preop: {
19 changes: 17 additions & 2 deletions llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll
@@ -2888,8 +2888,7 @@ define i32 @readlane_idempotent(i32 %arg, i32 %lane) {
define i32 @readlane_idempotent_different_lanes(i32 %arg, i32 %lane0, i32 %lane1) {
; CHECK-LABEL: @readlane_idempotent_different_lanes(
; CHECK-NEXT: [[READ0:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[ARG:%.*]], i32 [[LANE0:%.*]])
; CHECK-NEXT: [[READ1:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[READ0]], i32 [[LANE1:%.*]])
; CHECK-NEXT: ret i32 [[READ1]]
; CHECK-NEXT: ret i32 [[READ0]]
;
%read0 = call i32 @llvm.amdgcn.readlane(i32 %arg, i32 %lane0)
%read1 = call i32 @llvm.amdgcn.readlane(i32 %read0, i32 %lane1)
@@ -3061,6 +3060,22 @@ define amdgpu_kernel void @permlanex16_fetch_invalid_bound_ctrl(ptr addrspace(1)
ret void
}

; --------------------------------------------------------------------
; llvm.amdgcn.permlane64
; --------------------------------------------------------------------

define amdgpu_kernel void @permlane64_uniform(ptr addrspace(1) %out, i32 %src0) {
; CHECK-LABEL: @permlane64_uniform(
; CHECK-NEXT: [[SRC1:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[SRC0:%.*]])
; CHECK-NEXT: store i32 [[SRC1]], ptr addrspace(1) [[OUT:%.*]], align 4
; CHECK-NEXT: ret void
;
%src1 = call i32 @llvm.amdgcn.readfirstlane(i32 %src0)
%res = call i32 @llvm.amdgcn.permlane64(i32 %src1)
store i32 %res, ptr addrspace(1) %out
ret void
}

; --------------------------------------------------------------------
; llvm.amdgcn.image.sample a16
; --------------------------------------------------------------------