[LoopVectorize] Vectorize fixed-order recurrence with vscale x 1. #142772
@@ -6139,11 +6139,6 @@ LoopVectorizationCostModel::getInstructionCost(Instruction *I,
    // First-order recurrences are replaced by vector shuffles inside the loop.
    if (VF.isVector() && Legal->isFixedOrderRecurrence(Phi)) {
      // For <vscale x 1 x i64>, if vscale = 1 we are unable to extract the
      // penultimate value of the recurrence.
      // TODO: Consider vscale_range info.
      if (VF.isScalable() && VF.getKnownMinValue() == 1)
        return InstructionCost::getInvalid();
      SmallVector<int> Mask(VF.getKnownMinValue());
      std::iota(Mask.begin(), Mask.end(), VF.getKnownMinValue() - 1);
      return TTI.getShuffleCost(TargetTransformInfo::SK_Splice,
@@ -8561,13 +8556,17 @@ addUsersInExitBlocks(VPlan &Plan,
/// users in the original exit block using the VPIRInstruction wrapping to the
/// LCSSA phi.
static void addExitUsersForFirstOrderRecurrences(
    VPlan &Plan, SetVector<VPIRInstruction *> &ExitUsersToFix) {
    VPlan &Plan, SetVector<VPIRInstruction *> &ExitUsersToFix, VFRange &Range) {
  VPRegionBlock *VectorRegion = Plan.getVectorLoopRegion();
  auto *ScalarPHVPBB = Plan.getScalarPreheader();
  auto *MiddleVPBB = Plan.getMiddleBlock();
  VPBuilder ScalarPHBuilder(ScalarPHVPBB);
  VPBuilder MiddleBuilder(MiddleVPBB, MiddleVPBB->getFirstNonPhi());

  auto IsScalableOne = [](ElementCount VF) -> bool {
    return VF == ElementCount::getScalable(1);
  };

  for (auto &HeaderPhi : VectorRegion->getEntryBasicBlock()->phis()) {
    auto *FOR = dyn_cast<VPFirstOrderRecurrencePHIRecipe>(&HeaderPhi);
    if (!FOR)
@@ -8649,6 +8648,13 @@ static void addExitUsersForFirstOrderRecurrences(
  for (VPIRInstruction *ExitIRI : ExitUsersToFix) {
    if (ExitIRI->getOperand(0) != FOR)
      continue;
    // For VF vscale x 1, if vscale = 1, we are unable to extract the
    // penultimate value of the recurrence. Instead, we can extract the last
    // element directly from VPInstruction::FirstOrderRecurrenceSplice.
Comment on lines +8651 to +8653:

Sounds somewhat similar to what was noted (exactly a year ago) in #137030 (comment).

Yes, it will work well for all targets. We're trying to enable tail folding (specifically EVL tail folding) when a fixed-order recurrence (FOR) phi is live-out from the loop. In this case, all live-out values of the FOR phi must be obtained by extracting the last active lane from the recurrence phi.

Yes, I also left a TODO comment for UF, because we currently don't have a method to clamp the VPlan for UF.
    // TODO: Consider vscale_range info and UF.
    if (LoopVectorizationPlanner::getDecisionAndClampRange(IsScalableOne,
                                                           Range))
      return;
    VPValue *PenultimateElement = MiddleBuilder.createNaryOp(
        VPInstruction::ExtractPenultimateElement, {FOR->getBackedgeValue()},
        {}, "vector.recur.extract.for.phi");
@@ -8864,7 +8870,7 @@ VPlanPtr LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes(
  addScalarResumePhis(RecipeBuilder, *Plan, IVEndValues);
  SetVector<VPIRInstruction *> ExitUsersToFix =
      collectUsersInLatchExitBlock(*Plan);
  addExitUsersForFirstOrderRecurrences(*Plan, ExitUsersToFix);
  addExitUsersForFirstOrderRecurrences(*Plan, ExitUsersToFix, Range);
  addUsersInExitBlocks(*Plan, ExitUsersToFix);

  // ---------------------------------------------------------------------------
@@ -3850,9 +3850,6 @@ VPFirstOrderRecurrencePHIRecipe::computeCost(ElementCount VF,
  if (VF.isScalar())
    return Ctx.TTI.getCFInstrCost(Instruction::PHI, Ctx.CostKind);

  if (VF.isScalable() && VF.getKnownMinValue() == 1)
    return InstructionCost::getInvalid();
Comment on lines -3853 to -3855:

Would it be possible to add a target-specific test case, where the cost model chooses vscale x 1, to make sure all cost model code paths are exercised?

Sure, I think you're referring to this.

Perhaps I'm wrong, but I think @fhahn means a test where we don't specify -force-vector-width=1 and the cost model ends up choosing vscale x 1 on its own.

I tried it, but without specifying the VF, the current upstream RISC-V cost model will select vscale x 2 for this case.

Hmm, I guess it's tricky to write a test for this. I think one way of doing it is to introduce a possible dependency between reads and writes in the loop, such that we are forced to limit the VF to vscale x 1 using a vscale_range of (1,16) or something like that. The only problem then is that the vectoriser may just prefer a fixed-width VF instead, i.e. VF=4.

Perhaps you could have a loop with a combination of gathers and normal loads/stores with a dependency issue that limits the VF to vscale x 1. Requiring gather loads may then favour scalable vectors.
  return 0;
}
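Following up on the test discussion above, here is a rough, hypothetical sketch (not part of the PR) of the kind of loop the reviewers describe: a fixed-order recurrence, an indexed (gather) load that should make scalable vectors attractive, and a loop-carried dependence whose short distance caps the maximum safe VF. The function and variable names are invented, the loop assumes the IR-level vscale_range(1,2) function attribute is attached separately (there is no portable C++ spelling for it here), and whether the RISC-V cost model would actually end up choosing vscale x 1 for it has not been verified.

```cpp
// Hypothetical test-loop sketch only; names and the vscale_range(1,2)
// assumption are not from the PR.
void for_gather_dep(long *a, const long *b, const int *idx, int n) {
  long prev = 0; // fixed-order recurrence: carries b[idx[i-1]] into iteration i
  for (int i = 0; i < n; ++i) {
    long g = b[idx[i]];          // indexed load -> gather when vectorized
    a[i + 2] = a[i] + g + prev;  // loop-carried dependence, distance 2 elements
    prev = g;                    // (2 x i64 = 128 bits of safe vector width)
  }
}
```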
@@ -8,17 +8,51 @@ define i64 @pr97452_scalable_vf1_for_live_out(ptr %src) {
; CHECK-LABEL: define i64 @pr97452_scalable_vf1_for_live_out(
; CHECK-SAME: ptr [[SRC:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*]]:
; CHECK-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 23, [[TMP0]]
; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label %[[SCALAR_PH:.*]], label %[[VECTOR_PH:.*]]
; CHECK: [[VECTOR_PH]]:
; CHECK-NEXT: [[TMP1:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 23, [[TMP1]]
; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 23, [[N_MOD_VF]]
; CHECK-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[TMP3:%.*]] = call i32 @llvm.vscale.i32()
; CHECK-NEXT: [[TMP4:%.*]] = sub i32 [[TMP3]], 1
; CHECK-NEXT: [[VECTOR_RECUR_INIT:%.*]] = insertelement <vscale x 1 x i64> poison, i64 0, i32 [[TMP4]]
; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
; CHECK: [[VECTOR_BODY]]:
; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ]
; CHECK-NEXT: [[VECTOR_RECUR:%.*]] = phi <vscale x 1 x i64> [ [[VECTOR_RECUR_INIT]], %[[VECTOR_PH]] ], [ [[WIDE_LOAD:%.*]], %[[VECTOR_BODY]] ]
; CHECK-NEXT: [[TMP5:%.*]] = getelementptr inbounds i64, ptr [[SRC]], i64 [[INDEX]]
; CHECK-NEXT: [[TMP6:%.*]] = getelementptr inbounds i64, ptr [[TMP5]], i32 0
; CHECK-NEXT: [[WIDE_LOAD]] = load <vscale x 1 x i64>, ptr [[TMP6]], align 8
; CHECK-NEXT: [[TMP7:%.*]] = call <vscale x 1 x i64> @llvm.vector.splice.nxv1i64(<vscale x 1 x i64> [[VECTOR_RECUR]], <vscale x 1 x i64> [[WIDE_LOAD]], i32 -1)
; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], [[TMP2]]
; CHECK-NEXT: [[TMP8:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
; CHECK-NEXT: br i1 [[TMP8]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
; CHECK: [[MIDDLE_BLOCK]]:
; CHECK-NEXT: [[TMP9:%.*]] = call i32 @llvm.vscale.i32()
; CHECK-NEXT: [[TMP10:%.*]] = sub i32 [[TMP9]], 1
; CHECK-NEXT: [[TMP11:%.*]] = extractelement <vscale x 1 x i64> [[TMP7]], i32 [[TMP10]]
; CHECK-NEXT: [[TMP12:%.*]] = call i32 @llvm.vscale.i32()
; CHECK-NEXT: [[TMP13:%.*]] = sub i32 [[TMP12]], 1
; CHECK-NEXT: [[VECTOR_RECUR_EXTRACT:%.*]] = extractelement <vscale x 1 x i64> [[WIDE_LOAD]], i32 [[TMP13]]
; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 23, [[N_VEC]]
; CHECK-NEXT: br i1 [[CMP_N]], label %[[EXIT:.*]], label %[[SCALAR_PH]]
; CHECK: [[SCALAR_PH]]:
; CHECK-NEXT: [[SCALAR_RECUR_INIT:%.*]] = phi i64 [ [[VECTOR_RECUR_EXTRACT]], %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[N_VEC]], %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
; CHECK-NEXT: [[FOR:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[L:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[FOR:%.*]] = phi i64 [ [[SCALAR_RECUR_INIT]], %[[SCALAR_PH]] ], [ [[L:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
; CHECK-NEXT: [[GEP:%.*]] = getelementptr inbounds i64, ptr [[SRC]], i64 [[IV]]
; CHECK-NEXT: [[L]] = load i64, ptr [[GEP]], align 8
; CHECK-NEXT: [[EC:%.*]] = icmp eq i64 [[IV]], 22
; CHECK-NEXT: br i1 [[EC]], label %[[EXIT:.*]], label %[[LOOP]]
; CHECK-NEXT: br i1 [[EC]], label %[[EXIT]], label %[[LOOP]], !llvm.loop [[LOOP3:![0-9]+]]
; CHECK: [[EXIT]]:
; CHECK-NEXT: [[RES:%.*]] = phi i64 [ [[FOR]], %[[LOOP]] ]
; CHECK-NEXT: [[RES:%.*]] = phi i64 [ [[FOR]], %[[LOOP]] ], [ [[TMP11]], %[[MIDDLE_BLOCK]] ]
; CHECK-NEXT: ret i64 [[RES]]
;
entry:
@@ -43,17 +77,51 @@ define void @pr97452_scalable_vf1_for_no_live_out(ptr %src, ptr noalias %dst) {
; CHECK-LABEL: define void @pr97452_scalable_vf1_for_no_live_out(
; CHECK-SAME: ptr [[SRC:%.*]], ptr noalias [[DST:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*]]:
; CHECK-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 23, [[TMP0]]
; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label %[[SCALAR_PH:.*]], label %[[VECTOR_PH:.*]]
; CHECK: [[VECTOR_PH]]:
; CHECK-NEXT: [[TMP1:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 23, [[TMP1]]
; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 23, [[N_MOD_VF]]
; CHECK-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[TMP3:%.*]] = call i32 @llvm.vscale.i32()
; CHECK-NEXT: [[TMP4:%.*]] = sub i32 [[TMP3]], 1
; CHECK-NEXT: [[VECTOR_RECUR_INIT:%.*]] = insertelement <vscale x 1 x i64> poison, i64 0, i32 [[TMP4]]
; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
; CHECK: [[VECTOR_BODY]]:
; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ]
; CHECK-NEXT: [[VECTOR_RECUR:%.*]] = phi <vscale x 1 x i64> [ [[VECTOR_RECUR_INIT]], %[[VECTOR_PH]] ], [ [[WIDE_LOAD:%.*]], %[[VECTOR_BODY]] ]
; CHECK-NEXT: [[TMP5:%.*]] = getelementptr inbounds i64, ptr [[SRC]], i64 [[INDEX]]
; CHECK-NEXT: [[TMP6:%.*]] = getelementptr inbounds i64, ptr [[TMP5]], i32 0
; CHECK-NEXT: [[WIDE_LOAD]] = load <vscale x 1 x i64>, ptr [[TMP6]], align 8
; CHECK-NEXT: [[TMP7:%.*]] = call <vscale x 1 x i64> @llvm.vector.splice.nxv1i64(<vscale x 1 x i64> [[VECTOR_RECUR]], <vscale x 1 x i64> [[WIDE_LOAD]], i32 -1)
; CHECK-NEXT: [[TMP8:%.*]] = getelementptr inbounds i64, ptr [[DST]], i64 [[INDEX]]
; CHECK-NEXT: [[TMP9:%.*]] = getelementptr inbounds i64, ptr [[TMP8]], i32 0
; CHECK-NEXT: store <vscale x 1 x i64> [[TMP7]], ptr [[TMP9]], align 8
; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], [[TMP2]]
; CHECK-NEXT: [[TMP10:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
; CHECK-NEXT: br i1 [[TMP10]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP4:![0-9]+]]
; CHECK: [[MIDDLE_BLOCK]]:
; CHECK-NEXT: [[TMP11:%.*]] = call i32 @llvm.vscale.i32()
; CHECK-NEXT: [[TMP12:%.*]] = sub i32 [[TMP11]], 1
; CHECK-NEXT: [[VECTOR_RECUR_EXTRACT:%.*]] = extractelement <vscale x 1 x i64> [[WIDE_LOAD]], i32 [[TMP12]]
This is just an observation really, but it seems a shame we generate this code despite there being no users of the recurrence outside the loop.

The user is the resume phi ([[SCALAR_RECUR_INIT]] in the scalar preheader below).
; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 23, [[N_VEC]]
; CHECK-NEXT: br i1 [[CMP_N]], label %[[EXIT:.*]], label %[[SCALAR_PH]]
; CHECK: [[SCALAR_PH]]:
; CHECK-NEXT: [[SCALAR_RECUR_INIT:%.*]] = phi i64 [ [[VECTOR_RECUR_EXTRACT]], %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[N_VEC]], %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
; CHECK-NEXT: [[FOR:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[L:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[FOR:%.*]] = phi i64 [ [[SCALAR_RECUR_INIT]], %[[SCALAR_PH]] ], [ [[L:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
; CHECK-NEXT: [[GEP:%.*]] = getelementptr inbounds i64, ptr [[SRC]], i64 [[IV]]
; CHECK-NEXT: [[L]] = load i64, ptr [[GEP]], align 8
; CHECK-NEXT: [[GEP_DST:%.*]] = getelementptr inbounds i64, ptr [[DST]], i64 [[IV]]
; CHECK-NEXT: store i64 [[FOR]], ptr [[GEP_DST]], align 8
; CHECK-NEXT: [[EC:%.*]] = icmp eq i64 [[IV]], 22
; CHECK-NEXT: br i1 [[EC]], label %[[EXIT:.*]], label %[[LOOP]]
; CHECK-NEXT: br i1 [[EC]], label %[[EXIT]], label %[[LOOP]], !llvm.loop [[LOOP5:![0-9]+]]
; CHECK: [[EXIT]]:
; CHECK-NEXT: ret void
;
@@ -74,3 +142,11 @@ loop:
exit:
  ret void
}
;.
; CHECK: [[LOOP0]] = distinct !{[[LOOP0]], [[META1:![0-9]+]], [[META2:![0-9]+]]}
; CHECK: [[META1]] = !{!"llvm.loop.isvectorized", i32 1}
; CHECK: [[META2]] = !{!"llvm.loop.unroll.runtime.disable"}
; CHECK: [[LOOP3]] = distinct !{[[LOOP3]], [[META2]], [[META1]]}
; CHECK: [[LOOP4]] = distinct !{[[LOOP4]], [[META1]], [[META2]]}
; CHECK: [[LOOP5]] = distinct !{[[LOOP5]], [[META2]], [[META1]]}
;.
Does this imply that VF.isVector() returns true for single-element, non-vector VFs when VF.isScalable() && VF.getKnownMinValue() == 1 and vscale == 1? And is the TODO erased without considering vscale_range info?

47c9260: this TODO should be kept.

Mel-Chen@d63194a: I tried tightening the condition for clamping the VF range, but it gets stuck on the design of ExtractLastElement and VPLane. I removed the assertion as a workaround. While this appears to produce correct IR, I believe there must be a better solution.
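Regarding the VF.isVector() question above: a minimal sketch, assuming the ElementCount interface from llvm/Support/TypeSize.h in recent LLVM, where any scalable count with a non-zero known minimum is reported as a vector. Under that assumption, vscale x 1 is treated as a vector by the planner even though vscale == 1 at run time leaves a single lane; compile against an LLVM tree to confirm the printed values.

```cpp
// Sketch only: illustrates the assumed ElementCount semantics for vscale x 1.
#include "llvm/Support/TypeSize.h"
#include <cstdio>

int main() {
  llvm::ElementCount VF = llvm::ElementCount::getScalable(1);
  // Assumed output: isScalable=1 isVector=1 knownMin=1
  std::printf("isScalable=%d isVector=%d knownMin=%u\n",
              (int)VF.isScalable(), (int)VF.isVector(),
              (unsigned)VF.getKnownMinValue());
  return 0;
}
```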