
make fixed-point region inference use a dirty list #47602

Closed
@nikomatsakis

Description

One thing that would likely help with general NLL performance is to improve the fixed-point region inference step. Right now, the region inference is pretty naive:

/// Propagate the region constraints: this will grow the values
/// for each region variable until all the constraints are
/// satisfied. Note that some values may grow **too** large to be
/// feasible, but we check this later.
fn propagate_constraints(&mut self, mir: &Mir<'tcx>) {
    let mut changed = true;

    debug!("propagate_constraints()");
    debug!("propagate_constraints: constraints={:#?}", {
        let mut constraints: Vec<_> = self.constraints.iter().collect();
        constraints.sort();
        constraints
    });

    // The initial values for each region are derived from the liveness
    // constraints we have accumulated.
    let mut inferred_values = self.liveness_constraints.clone();

    while changed {
        changed = false;
        debug!("propagate_constraints: --------------------");

        for constraint in &self.constraints {
            debug!("propagate_constraints: constraint={:?}", constraint);

            // Grow the value as needed to accommodate the
            // outlives constraint.
            let Ok(made_changes) = self.dfs(
                mir,
                CopyFromSourceToTarget {
                    source_region: constraint.sub,
                    target_region: constraint.sup,
                    inferred_values: &mut inferred_values,
                    constraint_point: constraint.point,
                    constraint_span: constraint.span,
                },
            );

            if made_changes {
                debug!("propagate_constraints: sub={:?}", constraint.sub);
                debug!("propagate_constraints: sup={:?}", constraint.sup);
                changed = true;
            }
        }

        debug!("\n");
    }

    self.inferred_values = Some(inferred_values);
}

Specifically, it loops until a fixed point is reached, and on each iteration it revisits every constraint. Moreover, for each constraint, it performs a DFS. This can be a lot of wasted work, especially when none of the inputs to a given constraint have changed in a particular round (i.e., if we have a constraint R1: R2 @ P and R2 has not changed, there is no point in repeating the DFS).

To do better, we can use a work list. The idea would be to first walk the constraints and build a dependency map. This map would be keyed by a region variable R, and the value would be the list of indices of constraints where R appears on the "smaller" side (a sketch of building such a map follows the examples below). For example, if we had a constraint vector like

vec![
    R1: R2 @ P, // index 0
    R3: R2 @ Q, // index 1
    R4: R5 @ R, // index 2
]

then the map might be like

{
    R2: [0, 1],
    R5: [2],
}
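
Building such a map could look roughly like the following. This is a minimal, self-contained sketch, not the actual rustc code: RegionVid, Constraint, and build_dependency_map are simplified stand-ins for illustration, and the point/span parts of a constraint are omitted because they do not affect the dependency structure.

// Simplified stand-ins for the rustc-internal types, for illustration only.
use std::collections::HashMap;

#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
struct RegionVid(usize);

struct Constraint {
    sup: RegionVid, // the "larger" region, which grows
    sub: RegionVid, // the "smaller" region, whose value flows into `sup`
    // point and span omitted: they do not affect the dependency structure
}

/// Map each region R to the indices of the constraints whose *sub* side is R,
/// i.e. the constraints that must be re-run whenever R's value grows.
fn build_dependency_map(constraints: &[Constraint]) -> HashMap<RegionVid, Vec<usize>> {
    let mut map: HashMap<RegionVid, Vec<usize>> = HashMap::new();
    for (index, constraint) in constraints.iter().enumerate() {
        map.entry(constraint.sub).or_default().push(index);
    }
    map
}

fn main() {
    // The example from the text: R1: R2 @ P, R3: R2 @ Q, R4: R5 @ R.
    let constraints = vec![
        Constraint { sup: RegionVid(1), sub: RegionVid(2) }, // index 0
        Constraint { sup: RegionVid(3), sub: RegionVid(2) }, // index 1
        Constraint { sup: RegionVid(4), sub: RegionVid(5) }, // index 2
    ];
    let map = build_dependency_map(&constraints);
    println!("{:?}", map); // e.g. {RegionVid(2): [0, 1], RegionVid(5): [2]}
}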

Then we keep a "dirty list" of constraints. It starts out containing all constraints. Each round, we pop a constraint from the dirty list and do the DFS for it. If the "superregion" changed, we look up the dependent constraints in the map and push them onto the dirty list. We keep doing this until the dirty list is empty.
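
One possible shape for that loop is sketched below, reusing the simplified RegionVid and Constraint stand-ins from the previous sketch; propagate_with_dirty_list is a hypothetical name. Region values are reduced to plain sets of points and the per-constraint DFS is replaced by a set union, so this only illustrates the scheduling, not how values actually grow (the real code walks the MIR from the constraint point). The on_dirty_list bitmap is just one way to avoid queueing the same constraint twice at once.

use std::collections::{HashMap, HashSet, VecDeque};

fn propagate_with_dirty_list(
    constraints: &[Constraint],
    dependency_map: &HashMap<RegionVid, Vec<usize>>,
    values: &mut HashMap<RegionVid, HashSet<usize>>,
) {
    // The dirty list starts out containing every constraint; the bitmap
    // tracks membership so a constraint is never queued twice at once.
    let mut dirty_list: VecDeque<usize> = (0..constraints.len()).collect();
    let mut on_dirty_list: Vec<bool> = vec![true; constraints.len()];

    while let Some(index) = dirty_list.pop_front() {
        on_dirty_list[index] = false;
        let constraint = &constraints[index];

        // Grow the sup region to include everything in the sub region
        // (a stand-in for the DFS in the real implementation).
        let sub_value: HashSet<usize> =
            values.get(&constraint.sub).cloned().unwrap_or_default();
        let sup_value = values.entry(constraint.sup).or_default();
        let before = sup_value.len();
        sup_value.extend(sub_value);
        let made_changes = sup_value.len() > before;

        // If the superregion grew, every constraint whose sub side is that
        // region may now be able to grow its own sup, so re-queue those.
        if made_changes {
            let dependents = dependency_map
                .get(&constraint.sup)
                .map(|v| v.as_slice())
                .unwrap_or(&[]);
            for &dependent in dependents {
                if !on_dirty_list[dependent] {
                    on_dirty_list[dependent] = true;
                    dirty_list.push_back(dependent);
                }
            }
        }
    }
}

With this structure, a constraint is only revisited when the region on its "smaller" side has actually changed, so the total work is proportional to how often values grow rather than (number of constraints) x (number of fixed-point iterations).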

Labels

A-NLL: Area: Non-lexical lifetimes (NLL)
C-enhancement: Category: An issue proposing an enhancement or a PR with one.
I-compiletime: Issue: Problems and improvements with respect to compile times.
T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.
