Description
| | |
| --- | --- |
| Bugzilla Link | 37215 |
| Version | trunk |
| OS | Linux |
| CC | @kcc, @pcc |
Extended Description
Currently, llvm-cfi-verify's logic for determining whether an indirect branch is protected is fairly simple: it looks backwards from an indirect call and verifies that every code path leading to that call contains a branch to a ud2 instruction. One of its sanity checks is that the register holding the call target must not have been modified between the branch to ud2 and the call; however, depending on instruction ordering, this can legitimately happen when loading a function pointer from a vtable. For example, the following is fine:
```
$ cat bug.S
.global main
main:
        mov (%rax), %rbx
        ja fail
        call *%rbx
fail:
        ud2
$ clang -g -o bug bug.S && llvm-cfi-verify bug
Instruction: 0x2010dd (PROTECTED): callq *%rbx
0x2010dd = bug.S:7:0 (frame_dummy)
Total Indirect CF Instructions: 1
Expected Protected: 1 (100.00%)
Unexpected Protected: 0 (0.00%)
Expected Unprotected: 0 (0.00%)
Unexpected Unprotected (BAD): 0 (0.00%)
```
However, the following fails:
```
$ cat bug.S
.global main
main:
        ja fail
        mov (%rax), %rbx
        call *%rbx
fail:
        ud2
$ clang -g -o bug bug.S && llvm-cfi-verify bug
Instruction: 0x2010dd (FAIL_REGISTER_CLOBBERED): callq *%rbx
0x2010dd = bug.S:7:0 (frame_dummy)
Total Indirect CF Instructions: 1
Expected Protected: 0 (0.00%)
Unexpected Protected: 0 (0.00%)
Expected Unprotected: 0 (0.00%)
Unexpected Unprotected (BAD): 1 (100.00%)
```
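This second ordering is the shape that naturally arises from a CFI-protected virtual call, where the function pointer is loaded from the (already-checked) vtable after the branch to the trap. The following is only a minimal, hypothetical C++ sketch of that pattern, not code from the original report, and the exact build flags and generated instruction order may vary:

```cpp
// Hypothetical reproducer of the pattern: a virtual call compiled with
// Clang CFI, e.g. (assumed flags)
//   clang++ -flto -fvisibility=hidden -fsanitize=cfi-vcall -O2 -c repro.cpp
struct Base {
  virtual void f() = 0;
};

void call(Base *b) {
  // The generated code checks b's vtable pointer against the set of valid
  // vtables and branches to a trap (ud2) on mismatch. Depending on
  // instruction scheduling, the load of the function pointer out of the
  // vtable may be placed between that branch and the indirect call, which
  // is the same shape as the second example above and is reported as
  // FAIL_REGISTER_CLOBBERED.
  b->f();
}
```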
CLOBBERED failures account for a quarter of the Unexpected Unprotected failures in Chrome. It's unclear to me whether there's a simple way to fix this; to do it correctly, we would need to look backwards at the whole comparison logic and ensure that the only allowed clobber is derived from a memory load from the register checked in the comparison. As a simpler compromise, we could instead require that there is only a single allowed clobber, and that it is an explicit load from memory using a register as the base address.
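To make the compromise concrete, here is a rough sketch of the rule against a hypothetical, simplified instruction model (not llvm-cfi-verify's actual MCInst-based data structures): between the branch to ud2 and the indirect call, allow at most one clobber of the call-target register, and only if that clobber is an explicit load from memory addressed through a register.

```cpp
#include <optional>
#include <vector>

// Hypothetical, simplified instruction model for illustration only.
struct Instr {
  bool writesTarget;                   // modifies the call-target register?
  bool isMemLoad;                      // is it a load from memory?
  std::optional<unsigned> memBaseReg;  // base register of the memory operand, if any
};

// Sketch of the compromise check: at most one clobber of the call-target
// register is allowed on the path from the branch-to-ud2 to the call, and
// that clobber must be a load from memory using a register as the base.
bool pathIsAcceptable(const std::vector<Instr> &PathToCall) {
  int Clobbers = 0;
  for (const Instr &I : PathToCall) {
    if (!I.writesTarget)
      continue;
    if (++Clobbers > 1)
      return false;  // more than one clobber: reject
    if (!I.isMemLoad || !I.memBaseReg)
      return false;  // clobber is not an explicit register-based load
  }
  return true;
}
```

The full fix described above would additionally require checking that the base register of that load is the same register validated by the preceding comparison.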