
Miri/CTFE discriminant computation happens at the wrong type #62138

Closed
@RalfJung

Description


When loading discriminants of an enum that uses niche optimizations, Miri/CTFE has to do some arithmetic:

let adjusted_discr = raw_discr
    .wrapping_sub(niche_start)
    .wrapping_add(variants_start);

Currently, this arithmetic happens at type u128. That is probably wrong: it should happen at the type of the discriminant, which is narrower, so the two approaches have different overflow behavior.
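A minimal standalone sketch of the difference, using invented values for an imagined enum with an 8-bit niche-encoded discriminant (the names and numbers here are illustrative, not taken from rustc):

```rust
// Invented example: the raw value 1 sits below the niche start, so the
// subtraction must wrap around within the 8-bit discriminant type.
fn main() {
    let raw_discr: u128 = 1;      // value loaded from memory
    let niche_start: u128 = 2;    // smallest niche value
    let variants_start: u128 = 0; // variant index of the first niche variant
    let bits = 8;                 // width of the discriminant type

    // Arithmetic at u128, as currently done: the wraparound happens
    // at bit 128, not at bit 8.
    let at_u128 = raw_discr.wrapping_sub(niche_start).wrapping_add(variants_start);

    // Arithmetic at the discriminant's width: mask off the high bits
    // after each operation to emulate 8-bit wrapping.
    let mask = !0u128 >> (128 - bits);
    let adjusted = raw_discr.wrapping_sub(niche_start) & mask;
    let adjusted = adjusted.wrapping_add(variants_start) & mask;

    assert_eq!(adjusted, 0xff);    // correctly wrapped within 8 bits
    assert_ne!(at_u128, adjusted); // u128 arithmetic gives u128::MAX instead
    println!("u128: {:#x}\n  u8: {:#x}", at_u128, adjusted);
}
```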

It's just addition and subtraction, so signed vs unsigned does not matter and we could just mask off the "too high" bits after each operation, as in:

let mask = !0u128 >> (128 - bits);
let start = *self.valid_range.start();
let end = *self.valid_range.end();
assert_eq!(start, start & mask);
assert_eq!(end, end & mask);
start..(end.wrapping_add(1) & mask)

However, it might be more elegant to use our binary_*op methods in operator.rs.
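The masking snippet above can be exercised on its own; here is a runnable sketch (again with invented 8-bit values) showing how the half-open end of the valid range wraps back to 0 when the inclusive end is the type's maximum value:

```rust
fn main() {
    let bits = 8u32;
    let mask: u128 = !0u128 >> (128 - bits);

    // Invented valid range for an 8-bit niche: 254..=255 inclusive.
    let start: u128 = 254;
    let end: u128 = 255;
    assert_eq!(start, start & mask); // both bounds already fit in 8 bits
    assert_eq!(end, end & mask);

    // Converting to a half-open range: end + 1 would be 256, which the
    // mask wraps back to 0 within the 8-bit domain.
    let range = start..(end.wrapping_add(1) & mask);
    assert_eq!(range, 254..0);
    println!("{:?}", range);
}
```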

Cc @oli-obk @eddyb

Metadata


    Labels

    A-const-eval  Area: Constant evaluation, covers all const contexts (static, const fn, ...)
    A-miri  Area: The miri tool
    C-bug  Category: This is a bug.
    T-compiler  Relevant to the compiler team, which will review and decide on the PR/issue.
