Description
This example ICEs:

```rust
#![feature(nll)]
#![feature(impl_trait_in_bindings)]

use std::fmt::Debug;

fn main() {
    let x: Option<impl Debug> = Some(44_u32);
    println!("{:?}", x);
}
```
The reason is that the code which chooses when to "reveal" opaque types looks only for opaque types at the top level of the type. (Oddly, it also does so only if ordinary unification fails; having code that branches on whether unification succeeded is in general a bad idea, so we should fix that too.)
rust/src/librustc_mir/borrow_check/nll/type_check/mod.rs, lines 893 to 899 at c7df1f5
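To make the top-level problem concrete, here is an illustrative sketch (a toy type representation invented for this example, not rustc's actual data structures) of why a top-level-only check misses the opaque type nested inside `Option<impl Debug>`:

```rust
// Toy stand-in for the compiler's type representation; `Opaque` models
// an `impl Trait` type, `Option` models `Option<T>`.
enum Ty {
    Opaque,
    Option(Box<Ty>),
    Uint,
}

// What a top-level-only check effectively does: look at the outermost
// constructor and nothing else.
fn is_opaque_at_top_level(ty: &Ty) -> bool {
    matches!(ty, Ty::Opaque)
}

// What is needed to notice `Option<impl Debug>`: walk the whole type.
fn contains_opaque(ty: &Ty) -> bool {
    match ty {
        Ty::Opaque => true,
        Ty::Option(inner) => contains_opaque(inner),
        Ty::Uint => false,
    }
}

fn main() {
    // Models the `let x: Option<impl Debug> = ...` from the example above.
    let ty = Ty::Option(Box::new(Ty::Opaque));
    assert!(!is_opaque_at_top_level(&ty)); // the top-level check misses it
    assert!(contains_opaque(&ty));         // a deep walk finds it
    println!("top-level: {}, deep: {}",
             is_opaque_at_top_level(&ty), contains_opaque(&ty));
}
```

In the real compiler the type is `Option<Opaque(..)>`, so the outermost constructor is `Adt(Option, ..)`, not `Opaque`, and the reveal logic never fires.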
More deeply, though, this code is taking a somewhat flawed approach. In particular, it is looking at the results of inference, but -- at least presently -- opaque types can end up in the inferred type in one of two ways (as shown by #54593).
For example, one might have a mixture of opaque types that came from a recursive call (and which ought not to be revealed) and opaque types that the user wrote:

```rust
#![feature(nll)]
#![feature(impl_trait_in_bindings)]

use std::fmt::Debug;

fn foo() -> impl Copy {
    if false {
        let x: (_, impl Debug) = (foo(), 22);
    }
    ()
}

fn main() { }
```
The correct way to do this, I suspect, is to leverage the user-type annotations that we are tracking for NLL.
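As a rough sketch of that idea (all names here are invented for illustration; this is not rustc's API), the key move is to decide whether to infer a hidden type by consulting the user-written annotations recorded for NLL, rather than by inspecting whatever opaque types happen to appear in the inferred type:

```rust
use std::collections::HashSet;

// Hypothetical identifier for a particular opaque type in the program.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct OpaqueTypeId(u32);

// Models the user-type annotations tracked for NLL: which opaque types
// actually appear in something the user wrote.
struct UserTypeAnnotations {
    user_written: HashSet<OpaqueTypeId>,
}

impl UserTypeAnnotations {
    // Only opaque types the user wrote (e.g. the `impl Debug` in the
    // binding above) should have their hidden type inferred here; opaque
    // types that flowed in from a recursive call (e.g. `foo()`'s
    // `impl Copy`) should stay opaque.
    fn should_infer_hidden_type(&self, id: OpaqueTypeId) -> bool {
        self.user_written.contains(&id)
    }
}

fn main() {
    let written = OpaqueTypeId(0);   // the `impl Debug` the user wrote
    let recursive = OpaqueTypeId(1); // the `impl Copy` from calling `foo()`
    let annotations = UserTypeAnnotations {
        user_written: [written].into_iter().collect(),
    };
    assert!(annotations.should_infer_hidden_type(written));
    assert!(!annotations.should_infer_hidden_type(recursive));
    println!("ok");
}
```

The point of the sketch is only the source of truth: membership in the recorded annotations, not the shape of the inferred type, decides which opaque types get revealed.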